Site-Specific Time Series Data



Access token: please contact the WaDE Team at WaDE_WSWC@hotmail.com to request an API access token. The token protects the API from bots and attacks. The WaDE team plans to support a personalized token for each user as part of the WesDAAT user management system. We apologize for the inconvenience.
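The token is passed as an ordinary query parameter on each call. A minimal sketch of building a request URL with it (the token string below is a placeholder, not a real key):

```python
from urllib.parse import urlencode

# Placeholder token; request a real one from the WaDE team.
API_KEY = "YOUR-ACCESS-TOKEN"

# The token rides along as an ordinary query parameter on every call.
base_url = "https://wade-api.azure-api.net/v1/SiteVariableAmounts"
url = base_url + "?" + urlencode({"State": "UT", "key": API_KEY})
print(url)
```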

This endpoint retrieves historical time series data for a site, covering two types of data in WaDE. By passing in the appropriate options, you can search for available inventory in the system.

  1. Reservoir and Observation Site Time Series Water Data. Reservoir and observation site data refer to the historic water flows or volumes recorded by gage stations that states operate year-round or seasonally. These data are separate from streamflow information gathered by federal agencies such as USGS or the U.S. Bureau of Reclamation. Knowing river and reservoir status can help with long-term planning and flood forecasting modeling. Two related measurements often used are discharge and gage height. Discharge is the rate at which a volume of water passes a particular location. Gage height is the distance between the water's surface and the stream gage's datum reference point (i.e., a unique reference point used to accommodate changes in a stream channel over time). Reservoir and Observation Site Water Recordings demo:
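To get a feel for the units, discharge (a rate) can be converted to a volume by multiplying by time. The sketch below uses standard conversion factors (1 day = 86,400 s; 1 acre-foot = 43,560 cubic feet) and is illustrative only, not a WaDE calculation:

```python
# Convert a sustained discharge (cfs) over a period (days) to volume in acre-feet.
SECONDS_PER_DAY = 86_400
CUBIC_FEET_PER_ACRE_FOOT = 43_560

def cfs_to_acre_feet(discharge_cfs: float, days: float) -> float:
    cubic_feet = discharge_cfs * days * SECONDS_PER_DAY
    return cubic_feet / CUBIC_FEET_PER_ACRE_FOOT

print(round(cfs_to_acre_feet(1, 1), 3))    # 1 cfs for one day is about 1.983 acre-feet
print(round(cfs_to_acre_feet(10, 30), 1))  # 10 cfs for 30 days is about 595.0 acre-feet
```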

  2. Public Supply Time Series Data (Water Use). Refers to the reported deliveries to urban water service areas, known as the place of use (i.e., farmstead, municipal city boundary, residential use, etc.), along with the water source intake locations, known as points of diversion (i.e., river, canal, groundwater, etc.). Put another way, water is withdrawn at site-specific locations and then applied (i.e., depleted) at a different location. Site-Specific Public Supply Water Use demo:

Here is the API endpoint and its parameters:

Click on the arrow next to the "response" block in the link below to see the API schema and an example response.

Example Python code for using the API

# Needed libraries
!pip install numpy
!pip install pandas
!pip install openpyxl
import openpyxl
import os
import numpy as np
import pandas as pd
import requests
import json

pd.set_option('display.max_columns', 999)  # display all columns of a DataFrame in Jupyter Notebook
print("Libraries have been imported")
# Create output dataframes to store each dictionary section of the API return.
Organizations = pd.DataFrame()
WaterSources = pd.DataFrame()
VariableSpecifics = pd.DataFrame()
Methods = pd.DataFrame()
BeneficialUses = pd.DataFrame()
SiteVariableAmounts = pd.DataFrame()
Sites = pd.DataFrame()
print("done")
# Pre-set base elements of API call for easy construction.
# Save as strings.
# uat connection

# base API URL.
base_url_API = "https://wade-api.azure-api.net/v1/SiteVariableAmounts?" 

# set start & end dates of interest in MM/DD/YYYY format.
param1 = "StartDate=01/01/2000&EndDate=01/01/2012"

# limit search to state of interest, use abbreviation.
param2 = "State=UT"

# security API Key.
key = "key=38f422d1ada34907a91aff4532fa4669"

print("done")
# loop over the API calls through pages (index) one-at-a-time. 

StartIndex = 0  # advance by 1000 (the page size) after each call


# The loop: request up to 30 pages, one at a time.
for loopCounter in range(30):

    StartIndex_param = "StartIndex=" + str(StartIndex)
    print (StartIndex_param)
    
    # combine the API parameters together
    callString = base_url_API + param1 + "&" + param2 + "&" + StartIndex_param + "&" + key
    print (callString)
    
    # Call the API
    # check if API has a response
    try: 
        # The get
        response_dict = requests.get(callString).json()
        
        # Indexing and list slicing to append to individual tables.
        # Organizations Data       
        o_json_data = response_dict['Organizations'][0]
        o_tempDF = pd.json_normalize(o_json_data)
        Organizations = pd.DataFrame(o_tempDF, columns=['OrganizationName',
                                                        'OrganizationPurview',
                                                        'OrganizationWebsite',
                                                        'OrganizationState',
                                                        'OrganizationPhoneNumber',
                                                        'OrganizationContactName',
                                                        'OrganizationContactEmail'], index=[0])

        # WaterSource Data
        ws_json_data = response_dict['Organizations'][0]['WaterSources']
        ws_tempDF = pd.json_normalize(ws_json_data)
        WaterSources = pd.concat([WaterSources, ws_tempDF])  # DataFrame.append was removed in pandas 2.0
        
        # VariableSpecifics Data
        v_json_data = response_dict['Organizations'][0]['VariableSpecifics']
        v_tempDF = pd.json_normalize(v_json_data)
        VariableSpecifics = pd.concat([VariableSpecifics, v_tempDF])
         
        # Methods Data
        m_json_data = response_dict['Organizations'][0]['Methods']
        m_tempDF = pd.json_normalize(m_json_data)
        Methods = pd.concat([Methods, m_tempDF])
        
        # BeneficialUses Data
        bu_json_data = response_dict['Organizations'][0]['BeneficialUses']
        bu_tempDF = pd.json_normalize(bu_json_data)
        BeneficialUses = pd.concat([BeneficialUses, bu_tempDF])
        
        # SiteVariableAmounts Data
        ssa_json_data = response_dict['Organizations'][0]['SiteVariableAmounts']
        ssa_tempDF = pd.json_normalize(ssa_json_data)
        ssa_tempDF['StartIndex'] = str(StartIndex)  # track the StartIndex used
        SiteVariableAmounts = pd.concat([SiteVariableAmounts, ssa_tempDF])
        
        # Sites Data
        s_json_data = response_dict['Organizations'][0]['Sites']
        s_tempDF = pd.json_normalize(s_json_data)
        Sites = pd.concat([Sites, s_tempDF])
          
    except Exception:
        print(StartIndex_param + " has no data")
        
    
    StartIndex = StartIndex + 1000
    
    print("------------")
    
WaterSources = WaterSources.drop_duplicates().sort_values(by=['WaterSourceUUID']).reset_index(drop=True)
VariableSpecifics = VariableSpecifics.drop_duplicates().reset_index(drop=True)
Methods = Methods.drop_duplicates().sort_values(by=['MethodUUID']).reset_index(drop=True)
BeneficialUses = BeneficialUses.drop_duplicates().sort_values(by=['Term']).reset_index(drop=True)
Sites = Sites.drop_duplicates(subset=['SiteUUID']).sort_values(by=['SiteUUID']).reset_index(drop=True)

print("done")

Retrieve related POU & POD site info from the Sites dataframe and create a new dataframe

  • Creating an easy-to-read table of related POD sites per POU site from the RelatedPODSites and RelatedPOUSites fields.

  • Create temp POU table, create temp POD table, then concatenate into single output table.
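The pattern in the bullets above (one row per related site, then dictionary keys promoted to columns) can be sketched on a toy frame; the UUIDs below are made up for illustration:

```python
import pandas as pd

# Toy frame mimicking the Sites fields used below (UUIDs here are made up).
df = pd.DataFrame({
    'SourceSiteUUID': ['POU-1'],
    'SourcePODorPOU': ['POU'],
    'RelatedPODSites': [[{'SiteUUID': 'POD-1'}, {'SiteUUID': 'POD-2'}]],
})

# One row per related site, then expand each dict into its own column(s).
df = df.explode('RelatedPODSites').reset_index(drop=True)
expanded = pd.json_normalize(df['RelatedPODSites'].tolist())
df = pd.concat([df.drop(columns=['RelatedPODSites']), expanded], axis=1)
df = df.rename(columns={'SiteUUID': 'RelatedSiteUUID'})
print(df['RelatedSiteUUID'].tolist())  # ['POD-1', 'POD-2']
```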

# Create new POU specific dataframe, populate with items from Sites dataframe
RelatedSitesPOU = pd.DataFrame()
RelatedSitesPOU['SourceSiteUUID'] = Sites['SiteUUID']
RelatedSitesPOU['SourcePODorPOU'] = Sites['PODorPOUSite']
RelatedSitesPOU['RelatedPODSites'] = Sites['RelatedPODSites']
RelatedSitesPOU = RelatedSitesPOU[RelatedSitesPOU['SourcePODorPOU'] == 'POU'].reset_index(drop=True)

try:
    # explode the entries in the RelatedPODSites column (one row per related site), reset index.
    RelatedSitesPOU = RelatedSitesPOU.explode('RelatedPODSites').reset_index(drop=True)

    # fetch column RelatedPODSites as a Series, then return dataframe where the column labels are the keys of the dictionaries.
    RelatedSitesPOU = pd.concat([RelatedSitesPOU, RelatedSitesPOU["RelatedPODSites"].apply(pd.Series)], axis=1)

    # rename columns
    RelatedSitesPOU = RelatedSitesPOU.rename(columns={"SiteUUID": "RelatedSiteUUID"})

    # drop index & RelatedPODSites columns from dataframe (no longer needed).
    RelatedSitesPOU = RelatedSitesPOU.drop(columns=["RelatedPODSites", 0])
    
except Exception:
    print("No POU sites for the returned SiteVariableAmounts records.")
    RelatedSitesPOU = pd.DataFrame()


# drop null rows
RelatedSitesPOU = RelatedSitesPOU.dropna()

print(len(RelatedSitesPOU))
RelatedSitesPOU.head(1)

# Create new POD specific dataframe, populate with items from Sites dataframe
RelatedSitesPOD = pd.DataFrame()
RelatedSitesPOD['SourceSiteUUID'] = Sites['SiteUUID']
RelatedSitesPOD['SourcePODorPOU'] = Sites['PODorPOUSite']
RelatedSitesPOD['RelatedPOUSites'] = Sites['RelatedPOUSites']
RelatedSitesPOD = RelatedSitesPOD[RelatedSitesPOD['SourcePODorPOU'] == 'POD'].reset_index(drop=True)

try: 
    # explode the entries in the RelatedPOUSites column (one row per related site), reset index.
    RelatedSitesPOD = RelatedSitesPOD.explode('RelatedPOUSites').reset_index(drop=True)

    # fetch column RelatedPOUSites as a Series, then return dataframe where the column labels are the keys of the dictionaries.
    RelatedSitesPOD = pd.concat([RelatedSitesPOD, RelatedSitesPOD["RelatedPOUSites"].apply(pd.Series)], axis=1)

    # rename columns
    RelatedSitesPOD = RelatedSitesPOD.rename(columns={"SiteUUID": "RelatedSiteUUID"})

    # drop index & RelatedPOUSites columns from dataframe (no longer needed).
    RelatedSitesPOD = RelatedSitesPOD.drop(columns=["RelatedPOUSites", 0])
    
except Exception:
    print("No POD sites for the returned SiteVariableAmounts records.")
    RelatedSitesPOD = pd.DataFrame()
    

# drop null rows
RelatedSitesPOD = RelatedSitesPOD.dropna()

print(len(RelatedSitesPOD))
RelatedSitesPOD.head(1)

# Concatenate POU with POD data
try:
    frames = [RelatedSitesPOU, RelatedSitesPOD]
    RelatedSites = pd.concat(frames).reset_index(drop=True)
    RelatedSites = RelatedSites.sort_values(by=['SourcePODorPOU', 'SourceSiteUUID', 'RelatedSiteUUID', 'StartDate', 'EndDate'])
    print(len(RelatedSites))
    RelatedSites.head(1)
except Exception:
    print("No POD to POU relations in the data.")
    RelatedSites = pd.DataFrame()

with pd.ExcelWriter('utssps_APITest_StartIndex.xlsx') as writer:
    Organizations.to_excel(writer, sheet_name='Organizations')
    WaterSources.to_excel(writer, sheet_name='WaterSources')
    VariableSpecifics.to_excel(writer, sheet_name='VariableSpecifics')
    Methods.to_excel(writer, sheet_name='Methods')
    BeneficialUses.to_excel(writer, sheet_name='BeneficialUses')
    SiteVariableAmounts.to_excel(writer, sheet_name='SiteVariableAmounts')
    Sites.to_excel(writer, sheet_name='Sites')
    RelatedSites.to_excel(writer, sheet_name='RelatedSites')

print("done")
Demo links:
  • Site-Specific Reservoir and Observation Site demo: https://waterdataexchangewswc.shinyapps.io/SiteSpecificReservoirAndObservationSiteDemo/
  • Aggregated Budget Water Use demo: https://waterdataexchangewswc.shinyapps.io/AggregatedBudgetWaterUseDemo/

GET: retrieves time series data for a given site and specific variable. By passing in the appropriate options, you can search for available inventory in the system.

Query parameters

SiteUUID (string, optional): Search based on a universal site identifier across all the states in WaDE.

SiteTypeCV (string, optional): Search based on a controlled site type identifier across all the states in WaDE.

VariableCV (string, optional): Pass an optional search string for looking up inventory.

VariableSpecificCV (string, optional): Pass an optional search string for looking up inventory.

BeneficialUseCV (string, optional): Search for site-specific data based on a beneficial use category (primary?) (e.g., Irrigation) as defined by each state.

USGSCategoryNameCV (string, optional): A USGS water use category from the USGS controlled vocabulary (e.g., irrigation, groundwater, fresh).

StartDate (string, optional): Search for site-specific data with StartDate later than this date (DD/MM/YYYY). Leaving it empty returns all data back to the earliest date in the database.

EndDate (string, optional): Search for site-specific data with EndDate earlier than this date (DD/MM/YYYY). Leaving it empty returns all data up to the last TimeframeEndDate in the database.

SearchBoundary (string, optional): Search for water allocations within a geospatial boundary within or across state borders (e.g., Bear River watershed).

HUC8 (string, optional)

HUC12 (string, optional)

County (string, optional)

State (string, optional): Two-letter state abbreviation.

StartIndex (integer, optional): 0-based index for where to start querying for records. Defaults to 0 if not specified.

RecordCount (integer, min 1, max 10000, optional): Number of records to retrieve. Defaults to 1000 if not specified.
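Putting several of these parameters together, one page of a paginated request can be sketched as follows (the token is a placeholder):

```python
from urllib.parse import urlencode

# Parameter names match the query parameters documented above; the key is a placeholder.
params = {
    "State": "UT",
    "StartDate": "01/01/2000",
    "EndDate": "01/01/2012",
    "StartIndex": 0,       # 0-based; advance by RecordCount to page through results
    "RecordCount": 1000,   # default page size; max is 10000
    "key": "YOUR-ACCESS-TOKEN",
}
url = "https://wade-api.azure-api.net/v1/SiteVariableAmounts?" + urlencode(params)
print(url)
```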

Responses

200: search results matching criteria (application/json)
400: bad input parameter

Example request:

GET /v1/SiteVariableAmounts HTTP/1.1
Host: wade-api.azure-api.net
Accept: */*

Example response:
{
  "TotalSiteVariableAmountsCount": 1,
  "Organizations": [
    {
      "OrganizationName": "UTDWR",
      "OrganizationPurview": "Water rights administration, water planning, basin planning, water quality",
      "OrganizationWebsite": "https://waterrights.utah.gov/",
      "OrganizationPhoneNumber": "801-538-7240",
      "OrganizationContactName": "Craig Miller",
      "OrganizationContactEmail": "craigmiller@utah.gov",
      "State": "Utah",
      "VariableSpecifics": [
        {
          "VariableSpecificTypeCV": "SiteSpecificWithdrawal",
          "VariableCV": "SiteSpecificConsumptiveUse",
          "AmountUnitCV": "cfs",
          "AggregationStatisticCV": "average",
          "AggregationInterval": "1",
          "AggregationIntervalUnitCV": "month",
          "ReportYearStartMonth": "10/01",
          "ReportYearTypeCV": "irrigation year",
          "MaximumAmountUnitCV": "acre feet"
        }
      ],
      "Sites": [
        {
          "SiteName": "01-5567",
          "SiteUUID": "UT-01-5567",
          "NativeSiteID": "01-5567",
          "USGSSiteID": "01-5567",
          "SiteTypeCV": "01-5567",
          "Longitude": "-112.158743",
          "Latitude": "40.676523",
          "PODorPOUSite": "POD",
          "CoordinateMethodCV": "e.g., Digitized",
          "CoordinateAccuracy": "Very accurate +/- 1 ft",
          "WaterSources": [
            {
              "WaterSourceUUID": "123e4567-e89b-12d3-a456-426614174000",
              "WaterSourceTypeCV": "text"
            }
          ],
          "SiteGeometry": "POLYGON ((30 10, 40 40, 20 40, 10 20, 30 10))",
          "NHDNetworkStatusCV": "Y",
          "NHDProductCV": "NHD High Res.",
          "NHDUpdateDate": "2016-08-29T09:12:33.001Z",
          "NHDReachCode": "2867042",
          "NHDMeasureNumber": "35.40000332",
          "StateCV": "NHD High Res.",
          "County": "Salt Lake",
          "HUC8": "2025-06-11T15:55:58.409Z",
          "HUC12": "2025-06-11T15:55:58.409Z"
        }
      ],
      "WaterSources": [
        {
          "WaterSourceName": "text",
          "WaterSourceNativeID": "text",
          "WaterSourceUUID": "123e4567-e89b-12d3-a456-426614174000",
          "WaterSourceTypeCV": "text",
          "FreshSalineIndicatorCV": "text",
          "WaterSourceGeometry": "text"
        }
      ],
      "Methods": [
        {
          "MethodUUID": "UTDWRE-01",
          "MethodName": "UTDWRE Water Withdrawal Estimation",
          "MethodDescription": "A method for estimating water withdrawals by subarea",
          "MethodNEMILink": "https://www.nemi.gov/methods/method_summary/10002/",
          "ApplicableResourceType": "Groundwater and Surface Water",
          "MethodTypeCV": "Estimated",
          "DataCoverageValue": "85%",
          "DataQualityValue": "QC600",
          "DataConfidenceValue": "90%"
        }
      ],
      "BeneficialUses": [
        {
          "BeneficialUseCategoryCV": "text",
          "PrimaryUseCategoryCV": "text",
          "USGSCategoryNameCV": "text",
          "NAICSCodeNameCV": "text"
        }
      ],
      "SiteVariableAmounts": [
        {
          "SiteUUID": "01-5567",
          "SiteTypeCV": "01-5567",
          "Longitude": "-112.158743",
          "Latitude": "40.676523",
          "CoordinateMethodCV": "e.g., Digitized",
          "CoordinateAccuracy": "Very accurate +/- 1 ft",
          "VariableSpecificTypeCV": "Water Allocation All",
          "BeneficialUseCategoryCV": [
            {
              "BeneficialUseCategoryCV": "text"
            }
          ],
          "WaterSourceUUID": "UT-2210",
          "WaterSourceTypeCV": "Surface water",
          "MethodUUID": "text",
          "ReportYear": "text",
          "TimeframeStart": "01/01/1990",
          "TimeframeEnd": "12/31/1990",
          "Amount": 10,
          "AmountUnitCV": "cfs",
          "IrrigationMethodCV": "sprinkler",
          "IrrigatedAcreage": 100,
          "CropTypeCV": "Corn",
          "PopulationServed": 10000,
          "PowerGeneratedGWh": 5,
          "AllocationCommunityWaterSupplySystem": "Salt Lake City",
          "SDWISIdentifierCV": "Metropolitan Water District of Salt Lake and Sandy",
          "CustomerTypeCV": "Residential",
          "AssociatedNativeAllocationIDs": "Groundwater and Surface Water",
          "DataPublicationDate": "2025-06-11T15:55:58.409Z",
          "DataPublicationDOI": "text"
        }
      ]
    }
  ]
}