Yesterday morning, Steve presented at FIRST 2016 on "Correlating Threats Using Internet Snapshots". The presentation is one that has evolved ever since we were acquired by RiskIQ in September 2015. One of our primary goals, outside of making the platform better, is to ensure we are finding the best ways to communicate our message to our user community. If we can clearly outline the value of infrastructure analysis, then chances are others will feel more comfortable doing it.
Back in February, Steve and I gave a similar presentation at Kaspersky's SAS conference and, at the same time, released a new version of our API with RiskIQ datasets. The presentation introduced the concept of "infrastructure chains" and used visuals to articulate how someone would go about making connections in the data. We also discussed the value of the new datasets and how an analyst could use them to surface threats that may otherwise have been missed. Reflecting on the talk afterwards, we thought it went well, but didn't feel the message was as clear as it could be.
Taking the lessons learned from our first presentation, we decided to sit down and rework parts of it. This included the following:
- Articulating the three core products RiskIQ offers in order to show how the datasets added to PassiveTotal actually get created and used by the company. Our previous talk didn't mention any of this, so users had a hard time understanding where we were getting data like host pairs or SSL certificates.
- Transforming the infrastructure chain visuals to show clear relationships between specific datasets (by connecting them) and using hexagons to represent each dataset, allowing us to plug datasets into the gaps we created. This updated visual was far easier to speak to and allowed us to clearly show the difference between existing datasets and the new ones analysts could use.
- Outlining a narrative around signals and how any attack conducted on the Internet is likely to leave some sort of unique artifact analysts can use to fingerprint the actors involved. Our previous talk briefly hinted at this, but never made it explicit with a clear example. Putting this upfront gave us a clean message we could reiterate throughout the talk.
- Identifying the shortcomings of existing popular datasets like passive DNS, WHOIS, malware, and OSINT in detail. Focusing on the specifics of each dataset made it easier to discuss the examples later on.
- Overhauling all examples to include additional diagrams, explanations, and supporting data, so that users could copy the infrastructure mentioned in the presentation and research it themselves using PassiveTotal. Our previous talk had us diving into the deep end far too quickly. The updated slides told full stories instead of just presenting statistics.
- Adding caveats and considerations throughout the examples in order to point out the challenges we faced when using the datasets, or details we felt were important to impart. When we first spoke about these datasets, it was still early in their release.
The end result was this final presentation, which captured the essence of PassiveTotal and the data thousands of users leverage to surface malicious threats. The process of refining a message is not often glamorous or interesting, but we felt it was important to share how we are constantly evolving PassiveTotal, even when the changes aren't to the platform itself.