Digital Footprint Profile – v1.0.0

I just finished (for the most part) one of my recent projects, Digital Footprint Profile! This was one of the first projects I’ve done involving OAuth flows and connecting to external accounts, and it didn’t turn out too badly at all.

What Is It?

This web application is a way for people to view their ‘digital footprint’. It finds simple things that reflect poorly on them and should be removed, such as profanity, racial slurs, and poor tone.

Your digital footprint is essentially what you have left behind on the internet: anything you did, posted, commented on, or favorited/liked. Colleges, employers, and many other organizations may use this information to gauge your maturity and your online presence. Most of all, you can come across as quite the jerk when your posts are taken out of context, so this application deliberately doesn’t display context, to help give a different perspective on things.

One of the key properties of anything on the internet is that it can be saved, forever. Once you post something, wherever it may be, it will be saved. This isn’t a pre-warning system; it’s a correction system. It lets you know about things you have said in the past that you should remove or revise. In other words, it isn’t like the newer The Simpsons episode “The Girl Code”, which notifies you of what could happen before you post, but it does let you see what others will see after you post, to help you learn for the future.

Why Did I Make It?

I was originally asked to build this by my school district’s Positive Behavioral Interventions & Supports (PBIS) team. They had come up with the idea of creating something students could access to help them assess what they’ve said online and give them an easy way to clean all of that up. It will never be perfectly clean, since most social media sites don’t actually delete what is posted, but it does help in the sense that it gives students an idea of what should stay in their heads or stay offline.

I accepted the task because it would be interesting to work with the OAuth flows of various social networks and see whether I could make use of the data. Besides, I don’t really like stereotypical social media users. Put together, it was mostly a learning experience: I challenged myself to keep it as lightweight as possible by committing to not using a database (yes, no database at all) and not using frameworks (outside of what is included by the libraries I used).

How Does It Work?

The entire thing works by first asking each social network you want to scan for permission (via OAuth) to view the content it needs. When the social network redirects you back, the app downloads as much content as the network will provide and stores it for the next step. When you’re done logging in with all your networks and click “Next”, every post that was retrieved is run through a scoring algorithm that produces a simple score out of ten based on several signals, such as keywords and tone. It then filters out anomalies and posts that shouldn’t need attention by only showing those with a score of three or higher. This method works quite well overall, but it struggles with longer posts, where you might have had a single instance of profanity in a 300-word post.
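As a rough illustration of how the scoring and filtering fit together (the keyword list, weights, and threshold below are made up for the example, not the values the app actually uses):

```python
# Sketch of the "score out of ten, then filter" idea described above.
# The keyword list, weights, and threshold are illustrative placeholders,
# not the real implementation.

FLAGGED_KEYWORDS = {"damn", "hell"}  # placeholder word list
THRESHOLD = 3                        # only posts scoring 3+ are surfaced


def score_post(text: str) -> int:
    """Return a rough 0-10 score based on keywords and a crude tone check."""
    words = text.lower().split()
    score = 0

    # Keyword hits weigh heavily.
    hits = sum(1 for w in words if w.strip(".,!?") in FLAGGED_KEYWORDS)
    score += min(hits * 3, 6)

    # Very crude "tone" signal: shouting and aggressive punctuation.
    if len(words) > 2 and text.isupper():
        score += 2
    score += min(text.count("!"), 2)

    return min(score, 10)


def posts_needing_attention(posts: list[str]) -> list[tuple[int, str]]:
    """Keep only the posts that score at or above the threshold."""
    scored = [(score_post(p), p) for p in posts]
    return [(s, p) for s, p in scored if s >= THRESHOLD]
```

This also shows where the long-post weakness comes from: a single flagged word adds the same amount whether the post is ten words or three hundred, so a long, otherwise harmless post can score the same as a short, aggressive one.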

What’s Next?

Many updates will come from this, as I have learned a lot. The scoring algorithm needs to be refined, either by implementing a new, more complex sentiment analysis library or by combining several to improve accuracy across multiple forms of analysis. Secondly, machine learning would be nice to get into with this, but it would need a large sample size to be of any use. I’d also like to add loading bars and similar progress indicators while the app pulls content, so the user doesn’t sit on a white loading page at the callback URL. Much more will come, and I will be keeping this updated because I find it a particularly useful tool.
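For the multi-library idea, one possible approach (the libraries aren’t decided yet, so VADER and TextBlob here are just stand-ins for this sketch) would be to normalize each analyzer’s output and average them:

```python
# Illustrative only: VADER and TextBlob are assumed choices for this sketch,
# not libraries the project necessarily uses.
from statistics import mean

from textblob import TextBlob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

_vader = SentimentIntensityAnalyzer()


def negativity_scores(text: str) -> list[float]:
    """Return each analyzer's negativity, normalized to [0, 1]."""
    # VADER's compound score is in [-1, 1]; map it so 1.0 means very negative.
    vader_neg = (1 - _vader.polarity_scores(text)["compound"]) / 2
    # TextBlob's polarity is also in [-1, 1]; same mapping.
    blob_neg = (1 - TextBlob(text).sentiment.polarity) / 2
    return [vader_neg, blob_neg]


def combined_negativity(text: str) -> float:
    """Average the analyzers so no single library dominates the score."""
    return mean(negativity_scores(text))
```

Averaging keeps any one library’s quirks from dominating, and the combined value could feed into the same score-out-of-ten used today.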

Release Notes

  • Initial Release

Full change log on GitHub.

Author: Zachary DuBois

I am a person who makes random things and likes to problem solve.