Mapping the Construction of I-485

 

485-by-year-final

The geography of I-485’s construction begins in the south of the city. This immediately started providing relief for the increasing volume of traffic caused by the growth of the suburbs in south Charlotte and near the South Carolina border. The next order of business was connecting the attractions, university, and high-population suburbs in the northeast of the city. Finally, the west quadrant of the road was completed, alleviating traffic on Billy Graham Parkway around the airport and connecting the I-85 – I-77 bypass in the northwest.

I-485 broke ground in 1988 and was a completed beltway in 2015. It took 27 years to build 67.61 miles of the interstate, a rate of 2.504 miles per year. Compared to other beltways, this is a relatively lengthy period of construction.

I-270, a beltway around Columbus, Ohio, took 13 years to build and was completed in 1975, which equates to 4.228 miles of construction per year. This was partially due to the stimulus provided by the Federal-Aid Highway Act of 1956, championed by Dwight Eisenhower, which provided resources for state governments to jumpstart construction on the interstate system we know today.

I-465, the beltway around Indianapolis, broke ground in 1959; drawing from the highway stimulus, its 52.79 miles were completed at a blistering 4.799 miles per year.
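
Those rates are simple division, miles built over years of construction. A quick sketch of the arithmetic (I-270's mileage and I-465's duration are back-calculated from the stated rates, since they aren't given directly above):

```python
# Construction rate (miles per year) for the beltways discussed above.
beltways = {
    "I-485 (Charlotte)":    {"miles": 67.61, "years": 27},  # 1988-2015
    "I-270 (Columbus)":     {"miles": 54.96, "years": 13},  # mileage implied by 4.228 mi/yr
    "I-465 (Indianapolis)": {"miles": 52.79, "years": 11},  # duration implied by 4.799 mi/yr
}

for name, b in beltways.items():
    print(f"{name}: {b['miles'] / b['years']:.3f} miles per year")
```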

Constructing this collage of maps in ArcMap provided exposure to some of the more intermediate functions of the design toolkit. After acquiring the interstate highway data from the Mecklenburg Open Mapping portal, 17 data frames were created to represent the 17 phases of I-485 construction according to the history section of the I-485 article, which cites The Charlotte Observer.

In the layout view, I navigated to the data frame tab in the properties menu to set the extent of each map to mimic the extent of the first data frame, which I set manually. This, I believe, was the optimal design choice for mapping the different phases of construction. It is a shortcut for manually adjusting each map’s extent, which would have taken considerably longer, and it also ensures consistency across the design.
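
In script form, the same extent-matching step could look roughly like the arcpy sketch below. This is only a sketch, assuming an ArcMap 10.x session with the map document open; I did the actual work through the properties menu.

```python
# Rough arcpy sketch (ArcMap 10.x): match every data frame's extent to the
# first one instead of setting each through the data frame properties menu.
import arcpy

mxd = arcpy.mapping.MapDocument("CURRENT")      # the open map document
frames = arcpy.mapping.ListDataFrames(mxd)

reference_extent = frames[0].extent             # the extent set manually
for df in frames[1:]:
    df.extent = reference_extent                # mimic the first data frame

arcpy.RefreshActiveView()
```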

I enabled grids in the layout view, which is nice for checking the alignment of elements at a glance. I also adjusted the page layout to allow a custom margin (20 inches by 8) when exporting the map as an image.

Finally, I made use of the distribute tool, found in the right-click menu of the layout view. This allowed me to align all the rows automatically, eliminating the need to manually align each individual data frame. Each row was aligned horizontally and then vertically to ensure each was in the proper position. The same method was used for the text above each frame, though some of the text labels needed to be adjusted manually.

This was a fun exercise. I could take it further in the future by color coding the map according to how many lanes each section has. This would allow the presentation of lane-widening projects, which are still ongoing on I-485 as well as on many other beltways and interstates around the country. It would also be interesting to compile all the phases of construction into a short movie or .gif, which would require going back and upscaling each data frame individually to get a usable resolution.

Reflecting on the design, I’m not sure how to deal with the text labels. At a glance it can be confusing which map a label refers to, the map above or the map below. Perhaps I could have put the label in the middle of the beltway to clarify exactly which map is being labeled; this might have allowed each map to be bigger, but it might also detract from the negative space in the middle of the beltway and give a cluttered appearance. Bordering each map with its label using lines might have been appropriate, but that could have made the layout too busy. I’m happy with how it turned out, and it’s always good to consider the alternatives.

Mapping the Electoral College, Reality vs. Hypothetical

How much does your vote actually matter? This year’s presidential election was an interesting affair, to say the least. The votes haven’t been completely counted as of this writing, but the winner of the popular vote and the winner of the electoral vote will likely not be the same candidate.

The popular vote winner / electoral vote winner discrepancy isn’t unprecedented. In 2000, George Bush won the presidency despite Al Gore winning the popular vote. We’d have to go back to the 1800s to find the other two instances: Benjamin Harrison’s electoral victory over Grover Cleveland, who won the popular vote, and Rutherford B. Hayes’s electoral victory over Samuel Tilden, the winner of the popular vote. The latter dispute was settled by the Compromise of 1877, which promised the removal of federal troops from the South, in an attempt to satisfy popular sentiment, in exchange for a Hayes presidency.

Hillary Clinton will likely become the 4th presidential candidate in American history to win the popular vote but lose the electoral vote. This has brought the role of the electoral college into question in many circles. What role does the electoral college play?

In a representative democracy like the United States, people elect officials to represent their interests. The 538 electors that make up the electoral college correspond to the 435 representatives of the House, the 100 senators representing the states, and 3 electors representing the people living in Washington D.C. Typically, the electors vote in accordance with the popular vote, but it’s interesting to note they are not always legally bound to do so. The college is a system that was originally implemented to assure that states with small populations would have a fair say in elections. Article II, Section 1, Clause 2 of the Constitution is the origin of the electoral college’s use in elections.

Let’s look at how the electoral college represents the population.

electoral-college-2016-final

Higher population, more electoral votes. Each state’s allotment of electoral votes is reapportioned after the census to reflect changing populations. Let’s take a look at population and see how it compares.

population-2015

At a glance everything looks fine. Colors are similar and correspond between the two maps. Let’s compare electoral votes and population mathematically. By dividing the population by the electoral votes we can see how much of a state’s population is represented by 1 electoral vote.

electoral-weight

Lighter colored states have lower populations per electoral vote, meaning someone’s personal vote is worth more in a light-colored state than in a dark-colored state. For example, voting in Wyoming, the state with the lowest population per electoral vote, gives your vote 3.62 times more electoral weight than a vote in California, the state with the highest. This seems strange when first considering the differential. Let’s take a look at voter turnout in the 2016 election.
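
The arithmetic behind that 3.62 figure is simple division. A minimal sketch, using approximate 2015 population estimates (the exact numbers here are illustrative, not the ones behind the map):

```python
# Population represented by one electoral vote, using rough 2015 estimates.
states = {
    "Wyoming":    {"population":    586_000, "electoral_votes": 3},
    "California": {"population": 39_145_000, "electoral_votes": 55},
}

per_vote = {name: s["population"] / s["electoral_votes"] for name, s in states.items()}
for name, value in per_vote.items():
    print(f"{name}: {value:,.0f} people per electoral vote")

# Roughly how many times more electoral weight a Wyoming vote carries than a California vote.
print(per_vote["California"] / per_vote["Wyoming"])
```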

voter-turnout-2016

About 90 million of the estimated 231 million people eligible to vote didn’t vote in the 2016 general election. According to statisticbrain.com, 44.4% of people didn’t exercise their right to vote, one of the most critical rights in a democratic society. In the above map we can see some interesting correlation. California’s turnout is the lowest after Hawaii; is it fair that California should receive population-based electoral votes considering the amount of voter apathy? Should Florida receive the same number of electoral votes as New York despite having a notably higher voter turnout? Should voter turnout even matter at all when considering the allocation of electoral votes? Does it play a role in reality?

Let’s adjust the electoral vote per population by the percent of voter turnout.

population turnout adjusted.png

Nothing is significantly different. The Northwest voting bloc is relatively stronger. The Rust Belt as a region sees an increase in voting influence per person on the electoral college. The California-Wyoming comparison made earlier has seen its ratio drop from 3.62 to 2.71, meaning a vote in Wyoming still carries 2.71 times the electoral influence of a vote in California.
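
The adjustment is the same division with a turnout factor mixed in. A sketch, with turnout percentages as placeholder assumptions rather than the official 2016 figures:

```python
# Adjust population per electoral vote by turnout: only people who actually
# voted count toward a state's weight per elector. Turnout values here are
# placeholders for illustration.
states = {
    "Wyoming":    {"population":    586_000, "electoral_votes": 3,  "turnout": 0.60},
    "California": {"population": 39_145_000, "electoral_votes": 55, "turnout": 0.44},
}

adjusted = {
    name: s["population"] * s["turnout"] / s["electoral_votes"]
    for name, s in states.items()
}

# The Wyoming-to-California ratio shrinks when California's turnout is lower.
print(adjusted["California"] / adjusted["Wyoming"])
```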

Let’s see what the electoral college would look like if it were adjusted to reflect these numbers, taking the total voting population of each state and redistributing the 538 electors among them, excluding D.C.

electoral-college-redrawn

Of course, in reality the number of electors has to be a whole number; you can’t have 0.5 of an elector. A few things to consider: Florida’s electoral power has significantly increased, while California’s has decreased. The Great Plains states have had their electoral influence lowered. The Rust Belt has seen an increase across the board. Of course, in this scenario, changing the number of electoral votes based on voter turnout might encourage or discourage people from showing up as their state’s electoral influence waxes or wanes.
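
The redistribution itself is a straightforward proportional allocation. Here’s a minimal sketch of the idea, with two made-up states standing in for the full fifty:

```python
# Redistribute 538 electors in proportion to each state's actual voting
# population (population * turnout). The two states below are stand-ins;
# the real calculation runs over all fifty states, excluding D.C.
TOTAL_ELECTORS = 538

voters = {
    "State A": 5_000_000 * 0.65,
    "State B": 2_000_000 * 0.45,
}

total_voters = sum(voters.values())
electors = {name: TOTAL_ELECTORS * v / total_voters for name, v in voters.items()}

for name, e in electors.items():
    print(f"{name}: {e:.1f} electors")  # fractional, as noted above
```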

Perhaps this constant evolution of the electoral college would be a viable solution. As a representative democracy, citizens should expect accurate representation every time they fill out a ballot. Maybe this kind of feedback on voter turnout is excessive, and people may feel punished by the apathy of voters they share a state with. It’s also important to consider that, in reality, each state has 2 senators and at least 1 representative in the House, making the lowest possible number of electoral votes for a state 3.

If the election were held with this electoral college, including D.C.’s 3 votes, Trump would have beaten Clinton 312.9 to 225.1. In reality, if Michigan goes in favor of Trump, the count will be 316 to 228, which is surprisingly similar. It’s not hard to imagine some federal statisticians crunching numbers and tweaking the electoral college in a back room somewhere in Washington. It’s interesting to consider that if we remove the Rust Belt states of Pennsylvania, Ohio, Michigan, and Wisconsin, the count would be 222.6 to 222.2, showing just how much of a role the Rust Belt plays in this scenario, as well as in deciding the election in reality.

Nobody can predict the future course democracy will take in our country. Perhaps the representation of the electoral college will change, be replaced, or continue on in its current state. As of now, it plays a key role in directing the will of the nation through the representatives that champion the American democratic process.

They say the devil is in the details. And the details of this democracy are definitely geographic.

Mapping American Marijuana Laws 2016

After one of the most outrageous election cycles in American history, the geography of American marijuana laws quietly changed. Three states joined Washington, Washington D.C., Alaska, Oregon, and Colorado with successful ballot measures legalizing the recreational use of marijuana. Florida also joined the ranks of states where psychoactive variations of THC can be used for medical purposes.

marijuana-legality
Marijuana legality as of November 10th 2016 in the United States

The above map shows the updated status of marijuana legality across the nation, reflecting the successful ballot initiatives of the 2016 election cycle.

The geography is interesting to consider. The West Coast is the geographic bastion of legal recreational marijuana use. This isn’t hard to believe considering the historically liberal attitudes these states have held when legislating the plant. In the mid-1990s the effort to legalize medical marijuana originated in California and spread in a similar fashion.

Massachusetts, nestled in the similarly minded New England region of the northeastern United States, became the first East Coast state to legalize marijuana recreationally. Considering the trend of medical marijuana in the region, it is likely to become similar to the West Coast, with legality slowly permeating through the ballots of all the northeastern states.

The Southeast, more conservative with this type of legislation, allows nonpsychoactive treatment of a limited number of medical conditions. This includes THC in dropper or pill form but not in its plant form. Florida marks the first step in introducing legislation to the region in the form of psychoactive medical treatment, the plant form of marijuana containing THC that many people may be familiar with.

Washington D.C.’s previous decision to legalize recreational use is interesting considering its geography. It’s nestled at the crossroads of the Southeast and New England, two regions which have different legislative opinions regarding the plant. D.C. is unique in that recreational use is legal but you can’t legally buy marijuana anywhere in the district; dispensaries are not allowed to operate within its borders.

In the middle of the country, in the breadbasket of the Great Plains, marijuana remains illegal in all forms, medical or recreational. As of today, 7 states remain where marijuana is completely illegal. If marijuana continues to be a state-level decision, these states will likely be the last to legalize it medically and recreationally, if they ever decide to.

Word Clouds as Data

One of the most fun ways to visualize data is the use of word clouds. A word cloud takes a source of data, whether it’s a word document, webpage, transcript, book, or any other medium that uses words, and presents it in an easily digestible visual manner. We can make a word cloud for Thoughtworks to see its most commonly used words.

download-1
Word cloud of all the posts on Thoughtworks

Using wordclouds.com, I created the above word cloud of Thoughtworks. We can see at a glance that the word “data” is the most used word on the blog, considering its size. The larger words are the words that appear with the highest frequency.

You might be asking, “What does this do for me?” By looking at this picture we can see that this blog talks about data, maps, and several other common words from the marbleracing entry. The key is “at a glance”. Data visualization takes complicated data that might require in-depth parsing to find the interesting and relevant details and makes it easily accessible by presenting it in a way that emphasizes frequency; in the case of word clouds, the frequency of words. So, by glancing at the word cloud of Thoughtworks, we can quickly interpret what kind of data (words) it might contain.
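
Under the hood, a word cloud is just a frequency count rendered with font size. The Python wordcloud package can produce one in a few lines; the input file name below is a placeholder for whatever text you want to visualize:

```python
# Minimal word cloud: read a text file, count word frequencies, and render
# them with font size proportional to frequency.
from wordcloud import WordCloud

with open("blog_posts.txt", encoding="utf-8") as f:
    text = f.read()

cloud = WordCloud(width=1200, height=800, background_color="white").generate(text)
cloud.to_file("wordcloud.png")
```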

We can take this further and apply word clouds to chat rooms, message boards, and other social mediums to quickly and visually represent the gist of the conversation. We can use it to visualize articles and glean the main points or subject matter and we could use it to easily create accurate categorical tags for content.

For example, while watching the YouTube playlist for Crash Course: Philosophy, I was curious what the main ideas in the course were. This might help me understand, in a general sense, what the playlist is all about.

Since Crash Course is a video series, we’ll have to transcribe it into words to be able to represent the data as a word cloud. Luckily, all the episodes are already transcribed on the Nerdfighteria Wiki. This is another situation where the manual transcription of data would take hours or even days, but the internet and the curious, hard-working denizens that occupy it have already made this data available.

I went ahead and combined all of the transcripts in a single word document that can be easily interpreted by wordclouds.com.
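
If the transcripts were saved as individual text files instead, combining them is a short script. A sketch, with the folder and file names as assumptions:

```python
# Concatenate individual episode transcripts into one document for the
# word cloud generator. The folder name and file pattern are assumptions.
from pathlib import Path

transcripts = sorted(Path("transcripts").glob("*.txt"))
combined = "\n\n".join(p.read_text(encoding="utf-8") for p in transcripts)

Path("crash-course-philosophy-transcript.txt").write_text(combined, encoding="utf-8")
```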

crash-course-philosophy-transcript

Running this document through the word cloud generator produces the following result:

wordcloud-1
Unedited word cloud of Crash Course: Philosophy

Immediately, this word cloud might not provide us with a viable data visualization. It includes words from the regular conversational canon, and the philosophical terms I’m looking for are obscured under their sheer volume. We can edit the word list to create a more relevant word cloud. This word cloud parsing, however, can diminish or enhance the visualization depending on what you’re trying to achieve. So let’s carefully and thoughtfully eliminate words that don’t contribute to the visualization, particularly conversational words.

Editing the word cloud introduces a subjective interpretation of the data. It might be beneficial to establish criteria for which words are relevant and which are not, to create a more objective visualization. I chose to remove conversational words and keep concepts that could be directly tied to philosophy. The frequency of the words is also important to consider, as it determines the size of each word.
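
One way to make that pruning a little more systematic is to start from a standard stopword list and extend it with conversational words before counting. A small sketch, where the added words are examples rather than my exact list:

```python
# Filter conversational words before building the cloud, then inspect the
# top frequencies. The extra stopwords added here are only examples.
from collections import Counter
from wordcloud import STOPWORDS

stopwords = set(STOPWORDS) | {"like", "really", "thing", "things", "gonna", "say"}

with open("crash-course-philosophy-transcript.txt", encoding="utf-8") as f:
    words = f.read().lower().split()

cleaned = (w.strip('.,!?"()') for w in words)
counts = Counter(w for w in cleaned if w and w not in stopwords)
print(counts.most_common(20))  # check the list before generating the cloud
```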

wordcloud-2
Word cloud of Crash Course: Philosophy after editing the word list

We can see from the visualization that God is the most common word in the series. In fact, the first 10 or so episodes cover religion and the role theology plays in philosophy. This gives a general gist of the content of the series. Since the disparity between frequencies is so high, a lot of words were not included in order to emphasize the difference in frequency. If we manually adjust the word list to reduce the outlying frequencies, we will be able to include more words. As always, this data manipulation adds another layer of subjectivity, and the visualization moves more into the realm of artistic representation. I decided to classify the counts in order to lower the emphasis on the highest frequencies.
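
That classification step can be approximated by compressing the raw counts, for example with a log scale, before handing them to the generator. A sketch with made-up counts:

```python
# Compress the frequency range so a handful of very common words don't
# crowd everything else out of the cloud. Example counts are made up.
import math

raw_counts = {"god": 820, "argument": 410, "knowledge": 95, "utilitarianism": 22}

# Log-scaling keeps the relative order but shrinks the gap between counts.
scaled = {word: 1 + math.log(count) for word, count in raw_counts.items()}

for word, value in scaled.items():
    print(f"{word}: {value:.2f}")
```

If you’d rather not touch the counts directly, I believe the wordcloud package’s relative_scaling parameter tackles a similar problem from the rendering side.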

wordcloud-3
Word cloud of Crash Course: Philosophy after editing the word list and count

 


While the size discrepancy has been addressed, 150 words are still being excluded. I decided to switch to another website to see if I could get better results.

Tagul.com seemed to have satisfactory functionality. Unfortunately, it doesn’t allow high-resolution downloads without payment. I went ahead and manually parsed the data so it would be readable by Tagul and ran the visualization. I’m pleased with the results.

Word Cloud (1).png
Word cloud produced with Tagul.com with the word list and counts adjusted

Higher resolution here.

I feel like this cloud provides a good mix between size and number of words.

Word clouds are interesting visualizations that provide a unique take on textual data. Their unique aesthetics lend artistic expression to the usually gloomy aspects of data science. Quality data is worthless without proper presentation, and the use of graphic arts bridges that gap.

Mapping Dropout Rates in Charlotte, NC

“Education is the most powerful weapon which you can use to change the world.”

-Nelson Mandela

This project looked into the dropout rates in the city of Charlotte over 4 different years: 2004, 2008, 2010, and 2012.

I used ArcMap to map the dropout rates that were reported in the Quality of Life reports the city of Charlotte publishes yearly.

Quality of Life reports

These PDF reports were deprecated after the release of the new GIS applet that reports this data.

Quality of Life Explorer

Methodology

I created a spreadsheet to curate the data of several of the reports.

I then compared the data across years to calculate the change values. Where no values were present, I left the value as null. If there was no data in 2008 but data in both 2004 and 2010, I compared 2004 directly to 2010.
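
A minimal sketch of that comparison logic in pandas, assuming the spreadsheet has one column per year (the column names here are hypothetical):

```python
# Compute the dropout-rate change per neighborhood, falling back to a
# 2004-to-2010 comparison when the 2008 value is missing.
# Column names ("NSA", "rate_2004", ...) are assumptions.
import pandas as pd

df = pd.read_excel("dropout_rates.xlsx")

def change(row):
    if pd.notna(row["rate_2004"]) and pd.notna(row["rate_2008"]):
        return row["rate_2008"] - row["rate_2004"]
    if pd.notna(row["rate_2004"]) and pd.notna(row["rate_2010"]):
        return row["rate_2010"] - row["rate_2004"]  # skip the missing 2008 value
    return None                                     # left as null

df["change"] = df.apply(change, axis=1)
```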

For 2012 I used the included spreadsheet data. The shapefiles were different because the Quality of Life organization changed how they collected data, making direct comparison with the software difficult.

neighborhoods

I used the following scheme from ColorBrewer for the data maps and an included ArcMap scheme for the change maps.

color brewer.PNG

I included a map with all 4 years visible for easy comparison.

charlotte dropout map.png

To classify the data, I joined the Excel spreadsheet with the neighborhood shapefile using the NSA neighborhood identifier.

classify-the-data
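
The join itself can also be scripted with arcpy rather than done through the GUI. A rough sketch, where the layer, table, and field names are assumptions:

```python
# Join the dropout-rate table to the neighborhood shapefile on the NSA
# identifier (ArcMap 10.x). File and field names here are assumptions.
import arcpy

arcpy.MakeFeatureLayer_management("neighborhoods.shp", "neighborhoods_lyr")
arcpy.AddJoin_management(
    in_layer_or_view="neighborhoods_lyr",
    in_field="NSA",
    join_table="dropout_rates.csv",
    join_field="NSA",
)
```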

I used the following 6-class manual classification across the 4 maps.

classify

I used the following 9-class manual classification for the change maps.

classify

This project gave me exposure to the manual input of data, which is mechanical and boring but something I find intrinsically rewarding for some reason. I had to manually enter the data from the Quality of Life PDFs into a spreadsheet, which was time intensive, taking about an hour per report. In the future, if I’m ever parsing data in this format, I’ll use an autoscrolling feature to automatically scroll the reports while entering the data at the same time. This, in theory, would cut the data entry time in half. This exposure to data entry opens the door to other presentations of data through ArcMap and other data manipulation applications in the future.