There’s much to enjoy about being a student at the Rochester Institute of Technology. We’re a diverse campus with passionate professors and rich networking opportunities. We have over 300 student organizations and a co-op program where some degrees require paid, full-time work experience.

But RIT has its shortcomings.

With a population of over 18,000, finding a place to park is a challenge experienced by students and faculty alike. Hundreds have taken to petitions pleading for better parking opportunities.

(Update 2/20/19)

Paving new parking lots seems like the obvious solution to many students. However, as was brought to my attention, it may not be the most practical or efficient approach to solving the parking crisis.

While creating new parking lots is out of students’ control, optimizing existing parking lots using technology is not. That’s why my colleagues and I created a tool called Blind Spot.

We built the software in 24 hours as a part of BrickHack V. This is a retrospective.

What is BrickHack?

BrickHack is RIT’s annual hackathon. Each year, hundreds of students from all over New York state come together for 24 hours to create new projects from nothing. It’s a day of challenging yourself and having fun with like-minded developers, designers, and makers.

A sea of 400 students in RIT’s Clark Gym for BrickHack V.

For the fifth iteration of the event on February 16–17, 2019, I teamed up with long-time friends Connor Egbert and Anmol Modur. We’ve made projects together in the past.

Last year for BrickHack IV we built Lndry, a real-time non-crowdsourced washer and dryer availability service. If you want to read more about my experience building that, I recommend reading Why I won’t be reusing my winning hackathon strategy.

Joining the three of us for the first time was Bahdah Shin.

From left to right: Anmol Modur, Connor Egbert, Bahdah Shin, Zack Banack (me)

We sat crowded at a foldable table for an entire day trying to solve one of RIT’s biggest problems: parking.

The proposed solution

Our brainchild is Blind Spot. It reduces frustration and helps RIT students get to class on time.

Blind Spot utilizes existing overhead security cameras to determine available parking spots in real time.

Our rationale behind using security cameras: campus is already covered in them. Plus, new software for pre-installed hardware makes adoption more enticing.

Of course, we don’t have access to RIT’s security cameras (for good reason). After receiving permission, we set up our own camera while building the prototype.

The prototype works like this:

1. The parking lot cameras stream their video feed over RIT’s network to a central server.

2. At the server, the camera footage is processed by machine learning algorithms.

3. These algorithms determine where and how many vehicles are in a given parking lot.

4. The server relays this information back to students via a mobile interface and physical outdoor lights, telling them where they can park.

Machine learning recognizes cars in a parking lot

So long as we know the maximum capacity of a parking lot, we can determine if there’s an open spot based on the number of vehicles sitting in it.
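Under the hood, that availability check is just a subtraction. A minimal sketch (the function names here are hypothetical, not from our actual codebase):

```python
def open_spots(detected_vehicles: int, capacity: int) -> int:
    """Number of free spaces, clamped so an over-count never goes negative."""
    return max(capacity - detected_vehicles, 0)


def has_vacancy(detected_vehicles: int, capacity: int) -> bool:
    """True if the lot has at least one open space."""
    return open_spots(detected_vehicles, capacity) > 0
```

For example, a 120-space lot with 117 detected vehicles reports three open spots.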

If it sounds too easy, that’s because it is. There are a lot of things we must consider in order for this service to be reliable.

Problem #1: Faulty vehicle detection

If shown a video of a parking lot, the average human should be able to point out all the vehicles they see with relative ease. The more cars you see throughout your life, the more confident you can be in determining if a given thing is a car.

SUVs and dumpsters are both large, blocky objects. It’s knowing notable features, as well as context, that allows you to differentiate between the two.

Select all images with cars

Computers work in a similar fashion. But — at the time of this writing — they’re not quite on par. Ever wonder why those “confirm you’re not a robot” Captchas ask you to select all the pictures containing cars?

Computers rely on image classifiers trained on example data. The more data they’re fed, the more accurate their object analysis becomes.

From the get-go, we knew one of the worst things Blind Spot could do is mistakenly tell a user that a parking lot has a vacancy. To minimize the chances of this happening, we needed big data sets of vehicles to train the algorithms.

In theory, Blind Spot should only get more accurate at vehicle recognition over time. The more time it spends fixated on parking lots, the more it’s exposed to cars. Introducing positive reinforcement strengthens the computer’s perception of a “vehicle”.

Night time

A well-lit parking lot is necessary for more reliable object recognition. Fortunately, RIT keeps its parking lots reasonably bright (hooray, safety). This also made our lives easier as most of Blind Spot’s development took place after the sun had set.

Cars being recognized at night time in a dimly-lit parking lot.

Perspective and vehicle obscurity

Without the use of drones — which are banned at RIT — it’s unlikely Blind Spot would get access to directly-overhead footage of parking lots. This means smaller vehicles or those furthest from the camera could be obscured.

That’s part of the reason why I developed an administrative GUI. This web app allows us to easily map out parking lots in 3D space. It’s powerful knowing the precise location of every individual spot.

A .gif of the administrative panel

We can have the image recognition component only look for cars in specified regions. This means we can “filter” zones like handicap parking spaces.

Further, these regions can be used for data triangulation. Two security cameras overlooking one parking lot can strengthen the accuracy of the computer.

On a micro level, specific spaces can now be marked as either available or unavailable.
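The region logic can be sketched with simple bounding boxes. (The real admin tool maps regions in 3D space; here the spots are hypothetical axis-aligned rectangles in camera coordinates and detections are (x, y) vehicle centers.)

```python
# Each spot: name -> (x_min, y_min, x_max, y_max) region in camera coordinates.
SPOTS = {
    "A1": (0, 0, 50, 100),
    "A2": (50, 0, 100, 100),
    "H1": (100, 0, 150, 100),  # handicap spot, filtered from general counts
}
EXCLUDED = {"H1"}


def spot_for(detection, spots=SPOTS):
    """Map a detected vehicle center (x, y) to the spot region containing it."""
    x, y = detection
    for name, (x0, y0, x1, y1) in spots.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None  # vehicle outside any mapped region (road, sidewalk, ...)


def availability(detections, spots=SPOTS, excluded=EXCLUDED):
    """Mark each non-excluded spot as available (True) or occupied (False)."""
    occupied = {spot_for(d, spots) for d in detections}
    return {name: name not in occupied
            for name in spots if name not in excluded}
```

A vehicle whose center lands in no mapped region is simply ignored, which is what keeps roads and sidewalks out of the counts.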

Problem #2: Driving while using the app

Team Blind Spot doesn’t want to encourage the use of cell phones while driving. In order for us to justify the release of a mobile app, there’d need to be navigation capabilities built-in. That way, it’s a glorified GPS with parking information baked in.

Family photo of the admin GUI, the iPad app, and the video feed; 20 hours in

Twenty-four hours wasn’t enough time to get all that done. I managed to churn out a React Native iOS app. While I was working out some additional features (like push notifications for parking spot openings) Anmol had something brighter up his sleeve.

Anmol got Philips Hue smart light bulbs to change colors based on the emptiness of a parking lot. Watching the lights gradually shift from green to red as the parking lot next to the gymnasium filled up was neat.

Philips Hue smart light bulbs (image credit: makeuseof.com)

Not only was this a creative way to visualize the data being processed by the server, but it also allowed an easy way to understand parking vacancy.
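The color mapping itself is nearly a one-liner. In the Philips Hue API, a bulb’s `hue` field runs 0–65535, with red at 0 and green around 25500, so a linear interpolation on occupancy does the trick. (A sketch of the idea, not Anmol’s exact code.)

```python
HUE_GREEN = 25500  # Philips Hue API hue value for green
HUE_RED = 0        # ...and for red


def occupancy_to_hue(occupied: int, capacity: int) -> int:
    """Interpolate the bulb hue from green (empty lot) to red (full lot)."""
    ratio = min(max(occupied / capacity, 0.0), 1.0)
    return round(HUE_GREEN * (1.0 - ratio) + HUE_RED * ratio)
```

The resulting value would then be sent to the bulb’s state endpoint on the Hue bridge, e.g. via its REST API or the `phue` Python library.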

Our team showcased this concept to the BrickHack V judges when they made their rounds.

Installing “smart” lightbulbs at the entrances of parking lots would instantly tell drivers whether or not there was room for them inside.

This could reduce congestion for big events on campus. The lights could also be re-purposed to direct traffic or block off certain areas.

As the hackathon progressed, we realized Blind Spot wasn’t just for campuses. This tool could be used for businesses, venues, and municipalities.

Grocery stores and stadiums alike could benefit from Blind Spot. It doesn’t even need to be customer-facing. Receiving rich historical data and trend analyses on parking lots may reduce customer-company friction points.

Businesses looking to adopt will also find that the administrative GUI is pleasant to use. 😉

The tech stack

This flowchart changed slightly as the hackathon progressed (i.e. no Google Cloud)

We used a Raspberry Pi as a makeshift IP camera. It ran on the “motion” webcam streaming framework, but it was modified to stream video regardless of motion being detected. It’s not the optimal approach, but it worked just fine for the time we were allocated.
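That tweak is mostly configuration rather than code. A rough sketch of the relevant Motion daemon settings (option names are from Motion 4.x and may differ between versions; treat this as illustrative, not our exact config):

```
# /etc/motion/motion.conf (illustrative excerpt)
stream_port 8081        # serve an MJPEG stream on this port
stream_localhost off    # allow the central server to connect over the network
framerate 15            # frames per second to capture
emulate_motion on       # treat every frame as motion, so nothing is skipped
```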

The central server recognizes cars and maps them to a location using OpenCV and a Python implementation of YOLOv3. It transmits this data to a Node.js web server that serves the API for the React app, the HTML5 admin tool, and the Philips Hue lighting rig.
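The detection step boils down to filtering YOLO’s raw output: each output row is a box plus per-class scores. Counting vehicles looks roughly like this (a simplified sketch with hand-built rows instead of a real `cv2.dnn` forward pass, and without non-max suppression; the helper name is hypothetical):

```python
# COCO class indices that YOLOv3 is commonly trained on: car=2, bus=5, truck=7.
VEHICLE_CLASSES = {2, 5, 7}
CONF_THRESHOLD = 0.5


def count_vehicles(detections):
    """Count confident vehicle detections.

    Each detection is (cx, cy, w, h, objectness, [class scores...]),
    mirroring the layout of a YOLOv3 output row.
    """
    count = 0
    for cx, cy, w, h, objectness, scores in detections:
        class_id = max(range(len(scores)), key=scores.__getitem__)
        confidence = objectness * scores[class_id]
        if class_id in VEHICLE_CLASSES and confidence >= CONF_THRESHOLD:
            count += 1
    return count
```

The per-lot count from this step is what feeds the vacancy math and the Hue lights downstream.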

The React Native iOS app is built on top of Native Base. It scrapes campus data from the RIT Campus Map, maps.rit.edu.

The HTML5 admin tool was built in… drumroll, please… GameMaker: Studio 1.4. That’s right! I used a game development engine to make the utility. Don’t let tools dictate what you can and can’t do. Push the perceived boundaries of the tools you’re using. Especially under constraints: work smart, not hard. I’m quite familiar with GMS’s HTML5 export module. It allowed me to get a canvas up and running in no time.

The future of Blind Spot

If you search for “Blind Spot” on the App Store, you won’t see this project. We laid down the framework for a powerful tool. But, we’re bottlenecked.

Without RIT’s cameras, Blind Spot can’t exist in the way we envisioned. As students, we knew our journey trying to resolve administration-level issues would end here. At the very least, we had fun and learned a lot in the process.

Maybe this project can gain some footing if an RIT higher-up sees it. Take a chance on your students. After all, the only thing we have to lose is our parking spot.