
soabm's People

Contributors

albabnoor, bettinardi, binnympaul, bstabler, jfdman, khademul


soabm's Issues

Cleaning and Sorting ABM outputs

I was just looking over the output folder with Sam. In the process we noticed the current mess of files that the ABM "Build" configuration is producing. A few notes about those output files:

  1. I don't think we are going to want to explain all those output files in the user guide. I think we will want to prune it down to only the final iteration files.
  2. Sam suggests "binning" the output into subdirectories (CVM, for example). I think that idea could help make the output space more manageable.
  3. If you are going to consider Sam's suggestion, you will likely want to think about it before doing too much work on the data processing scripts.

taz 343 assignment demand too high

Binny highlighted that we have extreme congestion around TAZ 343 (with one MAZ 34301, SEQMAZ 431). ODOT has investigated and agrees that there are network issues.

However, there also appears to be something grossly incorrect with the demand. The MAZ that represents that TAZ has a total employment of 76 and has 52 households. In the AM (1.5 hour, 7-8:30) assignment period, that single MAZ has a total demand of ~2,000 trips - far too high an AM peak demand for that land use (it has about 4,000 in the PM 2-hour peak).

html too large

When doing a side-by-side comparison of two ABM runs, the html visualizer gets too large and internet browsers can no longer open and manage it effectively. As best I can tell this is due to the link assignment comparison, which isn't that important. I imagine that currently all link points are loaded in, which is what slows it down and blows up the size. What I suggest is that when doing an ABM side-by-side run we filter the links so that only links with a volume above, say, 100 by period and 1,000 by daily show up in the comparison. There would have to be a little extra logic, because the code will need to build a filtered link set for each scenario and then keep every link that meets the criteria in either scenario. So if a link is 200 in one scenario but 95 in the reference, we would still want that link's information even though it was below the 100 threshold in the reference.
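
A minimal pandas sketch of that union filter, assuming each scenario's link volumes have been exported to CSV first; the file and column names (LINKID, VOL_PERIOD, VOL_DAILY) are placeholders, not the visualizer's actual schema:

import pandas as pd

# Hypothetical per-scenario exports: one row per link.
build = pd.read_csv("build_link_volumes.csv")      # columns: LINKID, VOL_PERIOD, VOL_DAILY
ref = pd.read_csv("reference_link_volumes.csv")

PERIOD_MIN = 100
DAILY_MIN = 1000

def passing_links(df):
    # Links that clear the volume thresholds in one scenario.
    keep = (df["VOL_PERIOD"] > PERIOD_MIN) | (df["VOL_DAILY"] > DAILY_MIN)
    return set(df.loc[keep, "LINKID"])

# Union across scenarios: a link that clears the threshold in EITHER run is
# kept in BOTH, so a 200-vs-95 link still shows its 95 reference value.
keep_ids = passing_links(build) | passing_links(ref)

build_out = build[build["LINKID"].isin(keep_ids)]
ref_out = ref[ref["LINKID"].isin(keep_ids)]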

Related, on the welcome page there is a large shapefile. Currently the compare-to-OHAS html is 20 MB (hard to open) and the ABM-to-ABM html is 40 MB. I believe a lot of that 20 MB in the OHAS comparison is the welcome shapefile. We should do a test with the shapefile commented out of the html build and compare the file size. If the size and speed of the html are greatly improved, then we should consider either keeping the shapefile out (until a later point when we actually develop some functionality with the shapefile), or using a greatly simplified html, with something like district shapes as opposed to MAZs.

Correct dependencies.zip

When you unzip the dependencies.zip file, the resulting folder starts with an upper-case "D". Please correct this so that everything is consistent (with a lower-case "d").

Add explanation of work segment codes in user guide

Work segment codes in the wsLoc.csv output file mean the following:
-1: Not a worker
0: Management
1: White Collar
2: Blue Collar
3: Sales and..
4: Natural..
5: Production,..
99999: Works from home

update vdf data prep settings

  • @bettinardi - add a potential future area type adjustment in the mid-link capacity calculation

Add a link attribute called "AREATYPE_CAP_ADJ" (standing for Area Type Capacity Adjustment). For now that value will be zero, but the idea is that treatments could be applied to adjust the mid-link capacities by area type. As an example, we have been planning to develop different capacities for downtown mixed-use areas with heavy ped crossings. While we don't have those adjustments worked out now, we want a built-in way to apply them. So please have the capacities applied from the lookup table, but then have the process subtract out the adjustment in the link field (which will be set at zero for now); see the sketch after these bullets.

  • update cycle lengths and capacities based on DKS' review
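
A sketch of the intended capacity calculation, assuming the link table and capacity lookup are exported to CSV; the file and column names here are illustrative (MIDLINK_CAP standing in for whatever the vdf data prep actually computes):

import pandas as pd

links = pd.read_csv("links.csv")             # hypothetical export: LINKNO, PLANNO, LANES, AREATYPE_CAP_ADJ
lookup = pd.read_csv("capacity_lookup.csv")  # hypothetical lookup: PLANNO, LANES, MIDLINK_CAP

# Apply the capacity from the lookup table...
links = links.merge(lookup, on=["PLANNO", "LANES"], how="left")

# ...then subtract the area type adjustment. AREATYPE_CAP_ADJ is zero for
# now, so this is a no-op until area type treatments are developed.
links["MIDLINK_CAP"] = links["MIDLINK_CAP"] - links["AREATYPE_CAP_ADJ"]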

Cleaner ABM end state

If I'm looking at the ABM correctly, the end files that a user of the ABM will most typically go to after a run are outputs/networks/taz_skim_period_speed.ver.

This file naming means something to the modeling steps/process, but the name means nothing to the end user and is not intuitive at all.

I would like to suggest that the file name be changed to something like "FinalHighwayAssignment_Period.ver".

For the transit files, "tap_skim_period_setX.ver", it would be helpful to change the naming to "FinalTransitAssignment_Period_SetX.ver". Since all these files would start with "Final", maybe we could drop it. But basically we need file names that make more sense than the current ones. "Skim" in the file name does not represent why the user would go to that file; it just represents what the model flow uses the file for - skimming. We need to make the file names meaningful to the user, not the coder.
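
As a rough illustration only (the glob patterns are guesses from the names quoted above, and the real renames would live in the model scripts):

import glob
import os

# taz_skim_<period>_speed.ver -> FinalHighwayAssignment_<period>.ver
for path in glob.glob(os.path.join("outputs", "networks", "taz_skim_*_speed.ver")):
    period = os.path.basename(path).split("_")[2]
    os.rename(path, os.path.join("outputs", "networks",
                                 "FinalHighwayAssignment_%s.ver" % period))

# tap_skim_<period>_set<X>.ver -> FinalTransitAssignment_<period>_set<X>.ver
for path in glob.glob(os.path.join("outputs", "networks", "tap_skim_*_set*.ver")):
    _, _, period, set_part = os.path.basename(path)[:-len(".ver")].split("_")
    os.rename(path, os.path.join("outputs", "networks",
                                 "FinalTransitAssignment_%s_%s.ver" % (period, set_part)))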

ABM Fails need to Fail

The ABM is currently failing in non-critical ways. This makes it unclear whether the model has actually run or not. In many cases, many logs and outputs have populated, so someone new can easily mistake a failed run for a successful run that just ran really quickly. The ABM setup needs to be updated so that a critical failure means a hard stop in the model sequence, ideally with a clear notice to the user that something went wrong.

Currently in the setup, there is a critical fail in the ABM, and the setup continues through 3 iterations of skimming in Visum and failed ABM runs...
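
A minimal sketch of the fail-fast behavior, assuming the model sequence is driven from Python; the step names and commands below are placeholders, not the actual setup:

import subprocess
import sys

def run_step(name, cmd):
    # Run one model step; hard-stop the whole sequence on a non-zero exit code.
    code = subprocess.call(cmd, shell=True)
    if code != 0:
        print("FATAL: step '%s' failed with exit code %d. Stopping run." % (name, code))
        sys.exit(code)

# Hypothetical sequence for illustration.
run_step("initialize Visum", "python initialize_visum.py")
run_step("CT-RAMP", "run_ctramp.cmd")
run_step("assignment", "python run_assignment.py")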

clean up after a run

Delete files such as earlier CT-RAMP iteration outputs, intermediate truck trip matrices, parking cost debug files, etc. Maybe just delete the entire "other" folder? Let's plan to do this after @bettinardi reviews the next iteration of the model setup.
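
A sketch of the cleanup, with the paths and patterns as assumptions (the real file names would come out of @bettinardi's review):

import glob
import os
import shutil

# Remove the entire "other" folder, per the suggestion above (path assumed).
shutil.rmtree(os.path.join("outputs", "other"), ignore_errors=True)

# Remove earlier CT-RAMP iteration outputs; the pattern is a placeholder.
for f in glob.glob(os.path.join("outputs", "*_iteration[12]*.csv")):
    os.remove(f)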

Copy capacity solution

The user of the ABM is going to want a quick way to visualize the V/C ratio for a given ABM period. The SOABM's vdf process needs to put an effective capacity solution into the CAPPRT field so that Visum's standard V/C outputs/fields can be used by the ABM operator. If we don't do this, the ABM will not be as useful as it should be, and the user guide will have to carry additional warnings about not trusting any information related to the capacity field, CAPPRT. It is much more desirable to simply populate CAPPRT with an effective capacity solution for each link as the vdf process runs and calculates the different elements of the vdf.

Proper way to add Time Profiles

The ABM needs all transit lines to have time profiles. In some cases those time profiles can be removed, but the code assumes they are always present. Either the code needs to be updated to auto-generate time profiles, or the user guide needs to be updated with something like:

A page stating that time profiles are required to run the model. They should be auto-created whenever a new line is added, but in some cases they may be accidentally deleted.

The user needs to always ensure that they exist, and if they don't, here are the quick, easy steps to add them (Binny can add his steps).

Visum 18

Visum 18 will be out in October, and TPAU is preparing to update their machines from 16 to 18. ODOT needs to coordinate a time with RSG to make the switch for the ABM setup.

add park and ride flag to taps

@bettinardi

We have gone through and identified the TAPs with formal park and rides in the SO-ABM area; they are:
41, 60, 181, 201, 240, 351, 481

While Ben is updating the ABM setup and user guide, perhaps he can quickly add this new PnR field to the TAP attribution (I don't immediately remember where that should occur; Ben could probably do it much faster).
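
A sketch of the flagging step through Visum's COM interface, assuming TAPs are stored as stop areas and that a "PNR" user-defined attribute has already been added (both assumptions; the attribution may live elsewhere in the setup):

import win32com.client

PNR_TAPS = {41, 60, 181, 201, 240, 351, 481}  # formal park and rides, from the list above

Visum = win32com.client.Dispatch("Visum.Visum")
Visum.LoadVersion(r"inputs\SOABM.ver")  # hypothetical path

for stop_area in Visum.Net.StopAreas.GetAll:
    tap = int(stop_area.AttValue("NO"))
    stop_area.SetAttValue("PNR", 1 if tap in PNR_TAPS else 0)

Visum.SaveVersion(r"inputs\SOABM.ver")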

Revise ABM skimming procedure

Currently my understanding is that the ABM is configured to use "profile" speeds for skimming in all iterations. On the 6-14-17 call we identified that in the final release of the ABM (after calibration), only the first iteration should do a "warm start" with profile speeds; after that it should switch to coded free-flow speeds. Additionally, for future scenarios we won't want to maintain period-specific profile speeds for each link, so another option will need to be built in: instead of using profile speeds from the network, or free flow, there will be a lookup table of profile speeds by a series of settings (dimensions: time-of-day, speed, FC, lanes...), and the code will need to use that lookup table for skimming speeds in the first iteration, and then the standard speed field with assignment for all following iterations...

Please direct any questions to @jfdman or @bettinardi.

lookup table of link profile speeds

@bettinardi - for future scenarios we won't want to maintain period-specific profile speeds for each link, so another option will need to be built in: instead of using profile speeds from the network, or free flow, there will be a lookup table of profile speeds by a series of settings (dimensions: time-of-day, speed, FC, lanes...), and the code will need to use that lookup table for skimming speeds in the first iteration, and then the standard speed field with assignment for all following iterations...
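
A sketch of the lookup, assuming the link table and lookup table are CSV exports; the column names (TOD, SPEED, PLANNO, LANES, PROFILE_SPEED) are placeholders for the dimensions listed above:

import pandas as pd

links = pd.read_csv("links.csv")                  # hypothetical: LINKNO, TOD, SPEED, PLANNO, LANES
lookup = pd.read_csv("profile_speed_lookup.csv")  # hypothetical: TOD, SPEED, PLANNO, LANES, PROFILE_SPEED

# First iteration: skim with profile speeds looked up by dimension...
links = links.merge(lookup, on=["TOD", "SPEED", "PLANNO", "LANES"], how="left")

# ...falling back to the coded speed where the lookup has no entry.
links["SKIM_SPEED"] = links["PROFILE_SPEED"].fillna(links["SPEED"])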

post-process synpop person table to add a university students field

Part 1 - @DDudich

# Adds a major university column to the persons file and fixes NULL
# values in the Standard Occupation Classification (SOC) column.
x <- read.csv("persons.csv", as.is = TRUE)

# Replace "NUL" strings so the SOC column can be coerced to numeric.
x$soc <- as.numeric(gsub("NUL", 0, x$soc))

# Add the major university flag, defaulted to 0 for every person.
x$majoruni <- 0

# Sort rows by columns 30 and 10, and slot the new column (31) in after column 23.
x <- x[order(x[, 30], x[, 10]), c(1:23, 31, 24:30)]

# Renumber person IDs after the sort.
x$PERID <- 1:nrow(x)

write.csv(x, "persons_sorted_uni.csv", row.names = FALSE)

rename MAZ TAZID field

When exporting the MAZ TAZID field, the code renames it to TAZ. Let's rename the field in VISUM so we don't need to rename it on the fly.

Remove FACTYPE and cleanup some other network attributes

As soon as possible, the link field FACTYPE needs to be removed from the version file and user guide. ODOT maintains the Visum-defined attribute "PLANNO" for FC, not the UDA "FACTYPE". FACTYPE needs to be cleared from the version file and user guide to prevent future confusion. In general, the link and zone attributes need to be pared down to only the fields used by the model (or defined in the user guide, like PLANNO), to minimize error.

Ensuring the log files are meaningful

We need to re-examine the logging process in the ABM and verify that the logs being reported are important. Alternatively, ODOT needs a way to identify when an important log entry is generated and call it out, so that it is not buried in an endless log file.

add automatic dll copier

It would be ideal if the vdf dll and bmp could be added to the config\visum folder. Then at the beginning of the run a script could copy those files over to the AppData folders for all the user accounts on the computer: C:\Users\tdb205\AppData\Roaming\PTV Vision\PTV Visum 16\UserVDF-DLLs

That way the setup can ensure that when a new user runs the ABM, or an old user moves the ABM to a new computer, everything is current and set up correctly.
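
A sketch of the copy step (the source folder and the Visum version in the destination path are assumptions):

import glob
import os
import shutil

SRC = os.path.join("config", "visum")  # vdf dll and bmp checked into the repo
DEST = r"C:\Users\*\AppData\Roaming\PTV Vision\PTV Visum 16\UserVDF-DLLs"

# glob only matches folders that already exist, so only user accounts that
# have a UserVDF-DLLs folder are touched.
for dest in glob.glob(DEST):
    for f in glob.glob(os.path.join(SRC, "*.dll")) + glob.glob(os.path.join(SRC, "*.bmp")):
        shutil.copy2(f, dest)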

Related to this, I would like the prep steps for the SWIM external model broken out from the actual SWIM external model. We have had some improvements in the SWIM external model, and I would like to bring them into the ABM. If the setup steps for the SWIM external model were a prep step, then I could easily add this copy step to the R script, and then update the SWIM external model as a standalone script, as it should be, since it is its own model with its own update cycle separate from the ABM.

TOD_SPEED_FACTORS change to user input

Currently the Visum version file has 5 "network" inputs that are blank. They are holding locations for the code to fill with 5 factors, one for each TOD period (for example, the AM period is 1.5 hours long, so the factor is 1.5, meaning that capacity is multiplied by 1.5 to represent capacity for the period).

Currently the factors are stored in code and then input into Visum during the model run. However, the factors are static and therefore inputs, so the process needs to be updated so that the factors become inputs stored within the Visum input file and not within code.

Adding 24 hour volume to PM version file

From my email:

I see that the "report_count_volumes" function that now runs at the end of the ABM compiles the 5 periods into a daily total.

This leads me to two requests:

  1. The PM will be our typical analysis period. Therefore, it will be the version file we go to 9 times out of 10. Please update the "report_count_volumes" function to:
    a. Produce two lists: one for volumes at count locations and one for daily volumes everywhere (all links). We will need volumes on all links to compare across scenarios, not just at count locations.
    b. End the for loop on PM and load the 24-hour modeled volume for all links into the PM version file, so that we can see both the PM peak and the daily result in one version file (a daily-summation sketch follows this list).
    c. Update the function to save out volumes by period as well, so that we can do count validation by period as desired. We might not care about all the periods, but we will definitely care about the PM peak in calibration, not just daily.
  2. As 1.c suggests, please add count validation by time period (volume for scenario comparison) to the html, as this will be a very important calibration measure.
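
A sketch of the daily summation in 1.b, assuming per-period link volumes are available as CSV; the period labels and column names are assumptions:

import pandas as pd

PERIODS = ["EA", "AM", "MD", "PM", "EV"]  # assumed period labels

daily = None
for p in PERIODS:
    vols = pd.read_csv("link_volumes_%s.csv" % p)  # hypothetical: LINKID, VOLUME
    vols = vols.rename(columns={"VOLUME": p})
    daily = vols if daily is None else daily.merge(vols, on="LINKID")

daily["DAILY"] = daily[PERIODS].sum(axis=1)          # 24-hour total on every link
daily.to_csv("link_volumes_daily.csv", index=False)  # to be loaded into the PM version file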

Allow model to move forward with empty TAP lines

The model recently crashed when the file outputs\skims\tapLines.csv was created with blank records in the "LINES" column, meaning that those TAPs (with empty records) did not have a single accessible bus line. CT-RAMP crashes when that LINES field is empty.

The code needs to be updated so that the Python code removes those empty TAPs as viable TAPs, and then writes the deleted TAPs to a file that it draws the user's attention to, so that the user can go back later and investigate why those TAPs are not being served and whether they should be removed.
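
A sketch of that filter (the removed-TAPs file name is only a suggestion):

import pandas as pd

taps = pd.read_csv("outputs/skims/tapLines.csv")

# TAPs whose LINES field is blank have no accessible bus line.
blank = taps["LINES"].isna() | (taps["LINES"].astype(str).str.strip() == "")

# Save the dropped TAPs so the user can investigate why they are unserved.
taps[blank].to_csv("outputs/skims/tapLines_removed.csv", index=False)
print("Removed %d TAPs with no lines; see tapLines_removed.csv" % blank.sum())

# Rewrite tapLines.csv with only served TAPs so CT-RAMP does not crash.
taps[~blank].to_csv("outputs/skims/tapLines.csv", index=False)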

Add major university column to persons file

@DDudich's script adds a major university column to the persons file and fixes NULL values in the Standard Occupation Classification (SOC) column.

x <- read.csv("persons.csv", as.is = TRUE)
x$soc <- as.numeric(gsub("NUL", 0, x$soc))           # fix "NUL" SOC values
x$majoruni <- 0                                      # add major university flag
x <- x[order(x[, 30], x[, 10]), c(1:23, 31, 24:30)]  # sort rows, reorder columns
x$PERID <- 1:nrow(x)                                 # renumber person IDs
write.csv(x, "persons_sorted_uni.csv", row.names = FALSE)

vdf double precision issue

@binnympaul

The intersection congestion adjustment in the VDF function (ODOTVDF.cpp) is computed as follows:

double int_cong_adj = 1 + para_a2 * pow((pcuvol / int_cap), para_b2);

I believe this should be:

double int_cong_adj = 1.0 + para_a2 * pow((pcuvol / int_cap), para_b2);

I guess the expression after the plus sign is getting integerized. I'm not very sure, but the link travel times suggest that the value of int_cong_adj is 1.0 for the cases that I checked.

Fail needs to fail

Currently the ABM keeps running even after a critical failure. The ABM run process needs to be updated so that the model stops if a fatal error occurs.

PCE to Vehicle Factoring

As ODOT implements PCE assignment in the ABM, the team needs to remember that final link volume attributes need to be saved back to the network as user-defined attributes (UDAs) that have been adjusted for the truck PCE factor. Each period will need an auto, truck, and total volume, in addition to a daily set of totals that will be summed up at the end of each period run and added to the PM file (similar to SWIM's operation).

As this is completed, it is important that we remember to update the scripts that pull volume for the visualizer, as they will need to point to these new vehicle volumes as opposed to the PCE assignment.

Currently 3 different PCEs (for different truck classes) are being discussed. If that ends up being the approach, then the ABM will likely have only one truck PCE matrix. Therefore the code will have to calculate and report out (to a text file) the weighted average PCE factor for each collapsed truck matrix by period. The code will then be responsible for applying the correct weighted PCE factor to the truck PCE demand for each period, so that each period correctly converts the collapsed truck demand back to vehicles, which get saved as the UDA (truck volume for each period).
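
The weighted-average arithmetic, with made-up PCE factors and truck trips purely for illustration:

# Hypothetical PCE factors and period truck trips for the three classes.
pce = {"light": 1.5, "medium": 2.0, "heavy": 2.5}
trips = {"light": 400.0, "medium": 250.0, "heavy": 100.0}

# Weighted average PCE for the collapsed truck matrix this period,
# reported to a text file in the real implementation.
weighted_pce = sum(pce[c] * trips[c] for c in pce) / sum(trips.values())

# Converting the assigned PCE volume on a link back to vehicles for the UDA.
truck_pcu_volume = 1200.0  # from the period assignment (hypothetical)
truck_veh_volume = truck_pcu_volume / weighted_pce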

Building Pop Syn zonal totals from Pop Syn tables

It's overly burdensome to make the analysts ensure that the pop syn and zone inputs in the ABM are consistent.

The setup should be:
The input version file holds the pop syn controls (inputs) at the TAZ/MAZ level, but it is not a problem if they are inconsistent.
The code takes the input household and person tables, builds the TAZ and MAZ zone tabulations from those data sets, updates the appropriate fields in the working (output) version files, and uses those fields for the ABM (if they are used at all).

This will greatly simplify setting up new scenarios with different syn pops.
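
A sketch of the tabulation, with file, key, and control field names all stand-ins for the real pop syn schema:

import pandas as pd

hh = pd.read_csv("households.csv")  # hypothetical pop syn household table
per = pd.read_csv("persons.csv")    # hypothetical pop syn person table

# Attach each person's MAZ through the household table (key name assumed).
per = per.merge(hh[["HHID", "MAZ"]], on="HHID")

# Tabulate MAZ controls directly from the tables themselves; the real list
# of control fields would mirror whatever the ABM reads.
maz = pd.DataFrame({
    "HH": hh.groupby("MAZ").size(),
    "POP": per.groupby("MAZ").size(),
}).fillna(0).reset_index()

# These totals would then be written into the working (output) version file.
maz.to_csv("maz_controls.csv", index=False)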

Pull Fare District amounts into an input file

Currently transit fare districts are hard-coded. Fare districts need to be established at the TAP level, with fares by district coded in some input file location (not in a script). The user guide will need to be updated accordingly.

SWIM external model

We need to change the run of the SWIM external model so that there is a prep step that sources in the standalone SWIM external model functions.

The issue is that we have updated/corrected/improved the SWIM external model code for OSUM, and we want to implement it in the ABM, but we don't want to have an ABM script and an OSUM script; we want the functions script to be the same for all our uses.

One solution might be to store the SWIM external model code in the dependencies folder so that there can be an easy pointer for sourcing that script.

Then there would be a script in the scripts folder, specific to the ABM operation, that sets up the external model files, sources in the functions, runs the functions, and then saves everything in OMX format.

The important part here is that the functions need to be broken out into a separate script that is developed outside of the ABM process, like an R library.

Input Error Checker

Capturing this thought:

I have been meaning to start a list of mandatory checks on the ABM network for QC issues like this. I'm envisioning a contingency task where we develop and establish a "live" script that ODOT can easily modify. The purpose of that script would be to look over the Visum file for any "duh" issues that the user overlooked. For example (a checker sketch follows this list):

All transit lines need time profiles and headways.
All TAPs need routes.
All links need speeds, capacities, FC…
All nodes need…
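
A sketch of such a checker through Visum's COM interface; V0PRT and CAPPRT are standard Visum link attributes, but the exact collections and checks here are assumptions for ODOT to grow over time:

import win32com.client

Visum = win32com.client.Dispatch("Visum.Visum")
Visum.LoadVersion(r"inputs\SOABM.ver")  # hypothetical path

errors = []

# All transit lines need time profiles (headway checks would go here too).
for route in Visum.Net.LineRoutes.GetAll:
    if route.TimeProfiles.Count == 0:
        errors.append("Line route %s has no time profile" % route.AttValue("NAME"))

# All links need speeds and capacities.
for link in Visum.Net.Links.GetAll:
    if link.AttValue("V0PRT") <= 0 or link.AttValue("CAPPRT") <= 0:
        errors.append("Link %s is missing a speed or capacity" % link.AttValue("NO"))

for e in errors:
    print("INPUT CHECK: " + e)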

Readme update for git-lfs

Installing anything on ODOT machines is difficult. As we get TPAU members to test this setup, we are running into issues with git-lfs. We have workarounds, but I would like RSG to add to the home page readme a list of repo files that are stored in git-lfs as opposed to regular git. That way, if there is a failure (like we are currently experiencing, where we just get a 1 KB pointer file instead of the actual file), we have a list of files that we need to go back and hunt down some other way. Currently I believe the only file to list on the readme would be:
https://github.com/RSGInc/SOABM/blob/master/dependencies.zip

Speed up ABM initializing

We should look for ways to speed up all the Visum pre-processing in the ABM run batch file. Comment from Ben: "The 30-60 minutes for VISUM at the beginning is related to changing the zone systems, rebuilding the polygons, and building TAP and MAZ connectors. We could make it faster by not looping through objects with VISUM's API and instead getting all the data, doing the calculations in numpy/pandas, and then bulk-loading the results back into VISUM."
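
A sketch of the bulk pattern Ben describes, using the container-level GetMultiAttValues/SetMultiAttValues calls (signatures hedged; they return and take (index, value) pairs in the Visum versions we have used, and ADDVAL1 stands in for a real target attribute):

import numpy as np
import win32com.client

Visum = win32com.client.Dispatch("Visum.Visum")
Visum.LoadVersion(r"inputs\SOABM.ver")  # hypothetical path

# Slow: one COM round trip per link.
# for link in Visum.Net.Links.GetAll:
#     link.SetAttValue("ADDVAL1", 2 * link.AttValue("LENGTH"))

# Fast: one call out, vectorized math, one call back.
raw = Visum.Net.Links.GetMultiAttValues("LENGTH")  # (index, value) pairs
idx = [i for i, _ in raw]
lengths = np.array([v for _, v in raw])
doubled = [float(x) for x in 2 * lengths]
Visum.Net.Links.SetMultiAttValues("ADDVAL1", tuple(zip(idx, doubled)))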

move user's guide to wiki

We need to move the user's guide to the wiki, like SWIM. @bettinardi - when we do, we need to change the order of the sections. Section 6 (Creating and Running a Scenario) should be closer to the start, if not the first thing.

PopSim Automated consistencies with ABM

Right now, if the user updates the Syn Pop, my understanding is that the user needs to:

  1. Have a completed ABM run with the old Syn Pop, or start a new ABM run and let it fail with the new Syn Pop.
  2. Use the MAZ field out of the ABM, which is a MAZ seq, to update the PopSim output to have a MAZ seq number instead of a true MAZ identifying number. While a MAZ seq can be assumed for the Syn Pop, the only safe way is to use the seq number from a previous ABM run.
  3. Re-tabulate a series of MAZ and TAZ measures from the Syn Pop to the MAZ and TAZ tables used by the ABM, to ensure they are consistent.
  4. Load the updated MAZ and TAZ fields from the Syn Pop summaries back into the ABM tables (in Visum). The user can then take their updated household table (with MAZ seq numbering) and their updated MAZ and TAZ summaries and run the ABM with consistent and linked inputs.

When the ABM is running it needs to do the 4 steps above, so that the user can just provide a new syn pop and the code ensures that the MAZ numbers are sequenced correctly and all the important MAZ and TAZ tabulations are updated to align with the Syn Pop that has been input (a renumbering sketch for step 2 follows).
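
A sketch of step 2, assuming a true-MAZ-to-sequence crosswalk exported from a previous ABM run; the file and column names are placeholders:

import pandas as pd

hh = pd.read_csv("households.csv")       # new pop syn, with true MAZ IDs
xwalk = pd.read_csv("maz_sequence.csv")  # hypothetical: MAZ (true), MAZSEQ

# Replace true MAZ IDs with the ABM's sequence numbers rather than assuming
# an order; the crosswalk comes from the model itself, which is the safe way.
seq = dict(zip(xwalk["MAZ"], xwalk["MAZSEQ"]))
hh["MAZ"] = hh["MAZ"].map(seq)
hh.to_csv("households_seq.csv", index=False)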

Transit version files

Currently 15 TAP version files are created, each with its own transit assignment by set (1-3) and time of day and its own results. Two issues:

  • They create a lot of files that are hard to interpret and take up a lot of space.
  • The user needs a transit file (or an everything file, highway and transit) where the results across all sets and all periods are totaled up into a single file, so the user can see all the results in one place without having to open 15 transit files to see what the results look like over the day.

This is related to cleaning up the end state and file naming in #50.

review and better document VISUM attributes

@bettinardi : We need to review all the user parameters/attributes so that any attribute stored in Visum is explained in the user guide, and if the explanation doesn't make sense (like "user changes TAZ attribute in 3 different places"), then the setup/attributes need to be changed so that the user guide makes sense (for example, "change the TAZ attribute in this one place").

Besides the TAZ example, one other example called out in the contract is lanes. We will likely never have lanes by period. My thought is that the lanes-by-period attributes can stay, but the user guide would be updated to say that those are internal working fields (not to be touched), and that we would set up a procedure to copy the lanes attribute over to the 5 lanes-by-period fields, and explain that operation in the user guide, noting that if a user ever did want to code different lanes by period they could turn off that copy procedure and code those internal working fields by hand. That is one lanes example; there are probably others.

Syn Pop Context for the User Guide

As we update the Syn Pop for the ABM, we are finding all kinds of questions about how the ABM uses the Syn Pop information and how to build the new syn pop correctly.

The user guide needs to spell out each field that the model reads from the syn pop household and person tables and what each field needs to be.

Examples:
MAZ needs to be a sequenced number (although there should be a note in the user guide that the code does this sequencing automatically, assuming #53 is addressed).
Sex - what does the code assume 1 versus 2 means? Is there an option for "none provided", or are the values 1 and 2 strictly required?
Building type - what does the ABM code assume each building code stands for? The same goes for occupation, work status... all of them.

Each field needs to be spelled out in the user guide so that a new analyst can properly sync up the PUMS record output with the codes/fields that the ABM requires. While it may seem straightforward now, the PUMS can always change, so there needs to be a clear record of what was assumed for each code/field.
