Proposed features/Traffic light red green cases by junction
|Traffic lights red/green case tagging and estimated times|
|Status:||Draft (under way)|
|Tagging:||case=ordered list of alphabetical letters|
|Rendered as:||not rendered but useful for routing|
In traffic-light-controlled junctions, some traffic lights give a longer green than others. By tagging red and green light times and the recurring case pattern, routers can compute faster routes according to expected waiting times.
Give each time slice a letter and note which approach gets the green light and which gets the red. Yellow light timing is fixed, so mapping it is not needed.
Relation and Area
For each time slice a, b, etc., watch the lights and record an ordered list of time slices:
- Capital letter: Green light
- Small letter: Red light
- Number sign (#): Yield
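Assuming a hypothetical tag value such as `case=Aab` (one character per equally long time slice; the proposal does not specify the slice duration), a router could estimate the average wait at an approach as sketched below. The slice length, the treatment of `#` (yield) as "no wait", and the function name are all illustrative assumptions, not part of the proposal:

```python
def expected_wait(case: str, slice_seconds: float = 30.0) -> float:
    """Average wait (seconds) for a car arriving uniformly at random in
    the cycle, assuming it only waits for the current red run to end
    (no queue of other cars is modelled)."""
    cycle = len(case) * slice_seconds
    is_red = [ch.islower() for ch in case]  # capital = green, '#' = yield (no wait)
    if all(is_red):
        return float("inf")  # degenerate tag: the light never turns green
    # Rotate so the cycle starts on a non-red slice; a red run that wraps
    # around the end of the string then becomes one contiguous run.
    start = is_red.index(False)
    rotated = is_red[start:] + is_red[:start]
    total = 0.0
    run = 0
    for red in rotated + [False]:  # trailing False flushes the last run
        if red:
            run += 1
        else:
            r = run * slice_seconds
            total += r * r / (2 * cycle)  # mean wait contributed by this red run
            run = 0
    return total

expected_wait("Aab")  # 60 s of red in a 90 s cycle → 20.0 s average wait
```

A router could then add this per-approach penalty to the edge cost of ways entering the junction.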
Interesting idea, but this model, in its current form, is an extreme oversimplification: it makes "degenerate case" assumptions about the problem (i.e. the junction to be modeled), so the solution (the data to be captured) seems far too simple to be useful in the real world.
In order to collect any meaningful data which would be useful in routing, you must first be able to completely define every signaling device, for every lane – not just turning lanes and through lanes, but also those poorly designed junctions without turning lanes which nevertheless provide a few seconds of "green arrow" (to cross oncoming lanes) before switching to "all green". Murphy's law dictates that the lead car in that lane is going straight, forcing everyone behind it to stand still until the "all green" signal – those who wanted to turn must now fight with oncoming traffic, and even those wanting to go straight must wait for the car in front of them to cross the oncoming traffic!
Now, in order to predict the "efficiency" of that junction (or inefficiency, as the case may be), we'd need to beef up the data model a bit, because we'd need to collect more data – a lot more data, it turns out, because queuing theory and modeling of traffic congestion [Note: wikipedia links] are both extremely complex topics. So at the very least, for each desired intersection, you'd need to capture some statistical data, like how many cars arrive at a given lane, and when drivers are allowed to choose, how many make each choice. Of course, all of that data must be broken down by day of the week, and time of day, since the models will look completely different for the morning and evening rush hours.
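To make the scale of that concrete, here is a minimal sketch of what a single per-lane observation record might hold. Every field name below is invented for illustration and corresponds to nothing in the OSM schema; note that you would need one such record per lane, per day of the week, per time-of-day bucket:

```python
from dataclasses import dataclass, field

@dataclass
class LaneObservation:
    """Illustrative only: one statistical record for one approach lane."""
    lane_id: str               # which approach lane of the junction (hypothetical id)
    day_of_week: int           # 0 = Monday … 6 = Sunday
    hour_bucket: int           # 0–23, one bucket per hour of the day
    arrivals_per_hour: float   # mean cars arriving in this bucket
    turn_shares: dict = field(default_factory=dict)  # e.g. {"left": 0.2, "through": 0.7, "right": 0.1}
```

Even this toy model already multiplies out to 7 × 24 records per lane before the signal timings themselves are captured.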
Now, as you propose, you'll need to capture the signal light timings, also broken down by day and time, since we can't expect the signals to have static programming nowadays. On top of that, we still haven't taken into account the fact that some signals employ sensors to detect traffic back-ups, which adds a whole new level of dynamic complexity to the model. Finally, we shouldn't forget that some cities deploy still more technology, with sensors and traffic cams everywhere, all tied back to a central "control room" that links the traffic signals at every junction with every other junction – traffic engineers can dynamically tweak the flow on a city-wide scale – the human factor is the hardest of all to model.
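If timings really had to be captured per day and per time of day, the tag value would effectively become a schedule rather than a single case string. A purely hypothetical lookup, with invented rows and selection logic (and no attempt at the sensor-driven or centrally tweaked cases, which defeat static tagging entirely), might look like:

```python
# Hypothetical schedule: each row is (days of week, start hour, end hour,
# case string). Rows are checked in order; none of this exists in OSM.
SCHEDULE = [
    ({0, 1, 2, 3, 4}, 7, 10, "Aaab"),   # weekday morning rush: longer green
    ({0, 1, 2, 3, 4}, 16, 19, "Aaab"),  # weekday evening rush
    (set(range(7)), 0, 24, "Aab"),      # default for all remaining times
]

def case_for(day: int, hour: int) -> str:
    """Return the case string in effect on the given day (0 = Monday) and hour."""
    for days, start, end, case in SCHEDULE:
        if day in days and start <= hour < end:
            return case
    return ""

case_for(1, 8)  # Tuesday 08:00 → "Aaab" (morning-rush row)
case_for(5, 8)  # Saturday 08:00 → "Aab" (default row)
```

Sensor-actuated and centrally controlled signals have no fixed schedule at all, so even this table understates the problem.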
By now, your head is probably spinning, as I know mine is, for it is a near-impossible task to come up with such a model, and even then it still won't mirror reality. This is why the current trend is leaning towards live, real-time, crowd-sourced analytics – after all, every single commuter is equipped with a GPS-capable phone that is constantly updating its exact location, direction and speed to the NSA, er, I mean, the carrier, and what better predictor of traffic than real-time updates from the army of people that just went through it 2, 5, 10, 30 and 60 minutes ago? It's also a lot easier to build.