
Remembering the Past | Engendering the Future

The philosopher Santayana is famous for having once warned: “Those who cannot remember the past are condemned to repeat it.”

Recent events have provided several sobering examples of just how true that is. The Apple-FBI dispute was perhaps the most recent, painful example we've witnessed – a debate the Australians have continued down under. Both instances have involved no small amount of handwringing, hair-pulling, and chest-pounding that might have been avoided if the parties had let the past inform their discussion.

Both Digital Rights advocates and certain tech giants share a concern about privacy, as traditionally conceived, and about the "new" powers being contemplated for the intelligence agencies fighting terrorism, hackers, and other threats in today's world. I worked with the FBI when similar laws, responding to the advance of new technologies, were proposed some years ago. That experience gave me an understanding of the parameters and, to some extent, of the pushback asserted recently by Apple and by others in the past.

These are not new concerns. Back in 1994, when Voice over IP (VoIP) was emerging in the telecom industry, there was great concern on the part of Law Enforcement and the Intelligence Community that this new technology would undercut their ability to properly monitor validated threats poised to strike against the interests of the community. Congress, the Private Sector, Law Enforcement, and the Intelligence Community came together and worked collaboratively toward a solution – one greeted with no small consternation, hand-wringing, and worries that if the legislation were enacted then, as the old movie "Ghostbusters" put it, "dogs and cats would start living together!"

We did, in fact, enact the legislation – the Communications Assistance for Law Enforcement Act (CALEA) – and those terrible things did not happen, punctuating Santayana's point that if we don't learn from the past, we're doomed to repeat it. As important as these discussions are today, we need to take these lessons from the past, take courage and a deep breath, and move forward. Admittedly, cybersecurity has grown to be a much more complex and serious issue than it was in 1994. However, there are equities – valid privacy concerns – that need to be addressed.

Should National Security matters always outweigh privacy concerns, or should people's privacy be protected while authorities, corporations, and assorted others try to fend off cyber-attacks?


In the current environment, privacy as classically conceived tends to be a dominant factor that legislators and others want to consider. We are, however, seeing an evolution in the population: Millennials and Generation Z are starting to value something the literature calls "contextual richness" more than the privacy valued by their parents or grandparents – those who hold seats in the current legislatures and who stand at the heart of recent debates. Not that they want to be cavalier and throw security to the wind – it just means they value the help that can be extended to them when they post or share details on social media that would have been considered "private" by their parents: what their schedule is going to be, with consideration given to the impact the weather may have, and so on.

The confluence of social media, digital mobile devices, sensors, and location-based technology is generating unprecedented volumes of information about society and individuals. A recent study found that taking stock of a person’s Facebook likes, for example, creates a more accurate personality assessment than one done by friends and family. Armed with such insights, digital devices and services can anticipate what we’ll need next and serve us better than a butler or an executive assistant, according to Age of Context authors Robert Scoble and Shel Israel.

Of course, such benefits don’t come without trade-offs. A Pew Research report, The Future of Privacy, explores these changes, the growing monetization of digital encounters, and the shifting relationship of citizens and their governments. As people increasingly value the contextual richness that highly personalized technology brings to life – Scoble and Israel’s Age of Context – the concept of privacy is evolving toward a new normalcy. And as people willingly share more personal information – on social media, with location-based services, and elsewhere – securing that data only for authorized uses becomes more critical and daunting.

The trade-off between privacy and contextual richness will continue to evolve, just as the advent of the Internet and digital media changed the concept of property ownership and copyright protection. The ability to make unlimited copies – without depriving the original owner of use – forced a significant expansion and retooling of legal protections for intellectual property and copyrights. The same evolution must be applied to the way we protect privacy rights in the era of big data analytics, with the collection of ever-larger data sets from a myriad of sources. Today we grant specific permissions for the use of our information – both personal and aggregate – when we agree to privacy policies on social media and other digital services. Here at Cylance, we take privacy very seriously with a best-in-class privacy defense program. But as big data analytics grows through the application of AI's machine learning – spawning secondary and tertiary uses downstream from the primary data collectors – it may become impossible to seek permission from all vested parties.

Data collectors may ultimately have to be accountable for how your data is used, regardless of the permissions they obtain up front. One solution will be to embed access controls and continuous authentication into data itself at the point of creation. With such self-aware and self-protecting data, organizations can ensure that it flows securely to the right people – and only the right people – at the right time and in the right location. Enjoying the fruits of our connected world requires the free flow of data to people, places, and "things" – yet only the ones we authorize. When you're staying at your favorite hotel, and your room service breakfast arrives 15 minutes early – because the traffic en route to your morning meeting is snarled and the concierge knows you'll need a cab early – the benefits of sharing your preferences and schedule with the hotel are clear.
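To make the idea of self-protecting data concrete, here is a minimal sketch in Python. Everything in it – the `SelfProtectingRecord` and `AccessPolicy` names, the party/location/expiry checks, the hotel scenario – is hypothetical illustration, not an actual Cylance product or standard; a real deployment would also encrypt the payload and tie release to key management, which this sketch omits.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch: the record carries its own access policy and refuses
# to release its payload unless the requesting party, their location, and
# the time of the request all satisfy that policy.

@dataclass(frozen=True)
class AccessPolicy:
    allowed_parties: frozenset    # who may read the data
    allowed_locations: frozenset  # where they may read it from
    not_after: datetime           # when access expires

@dataclass
class SelfProtectingRecord:
    payload: str
    policy: AccessPolicy

    def read(self, party: str, location: str, now: datetime = None) -> str:
        """Release the payload only if the request satisfies the policy."""
        now = now or datetime.now(timezone.utc)
        if party not in self.policy.allowed_parties:
            raise PermissionError(f"{party} is not an authorized party")
        if location not in self.policy.allowed_locations:
            raise PermissionError(f"access not permitted from {location}")
        if now > self.policy.not_after:
            raise PermissionError("access window has expired")
        return self.payload

# Example: a guest's schedule, readable only by the hotel concierge.
record = SelfProtectingRecord(
    payload="breakfast 07:00, meeting downtown 08:00",
    policy=AccessPolicy(
        allowed_parties=frozenset({"concierge"}),
        allowed_locations=frozenset({"hotel-front-desk"}),
        not_after=datetime(2030, 1, 1, tzinfo=timezone.utc),
    ),
)

print(record.read("concierge", "hotel-front-desk"))
```

The point of the design is that the policy travels with the data: a downstream marketing partner who obtains the record still cannot read it, because the gate is in the object itself rather than in the collecting organization's perimeter.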

Yet you only want trusted partners and service providers to have access to such data. Developing the necessary security and accountability model for organizations that put personalized information and big data to use may take some time, owing to past wounds that, to borrow from Shakespeare, have yet to feel a scar. But if we do our jobs correctly, the benefits of our hyper-connected world should always outweigh the risks. The discussion of late has again taken on an intensity that is not unfamiliar. On a recent visit to Australia, I was interviewed on a morning news show and asked to join the debate, while being pushed to acknowledge that newly proposed actions are just the "creeping edge" of an ever-growing encroachment – or cyber snooping.

I stressed that to millennials it wouldn't be viewed as creeping so much as "facilitation" – making their lives simpler, more worthwhile, more effective and efficient. I also stressed that nothing we propose in this environment is going to be risk-free. Nothing human beings do is ever risk-free; there's always an element of risk. It comes down to what our risk tolerance is, what threats we're trying to mitigate, and what we're willing to do in the way of mitigation to bring that risk to a level we're all willing to accept. And of course, that's the challenge, because we all have differing and varying levels of risk tolerance.


The path forward requires communicating, working together in good faith, and understanding that even when we put our first best effort out there, it may need subsequent tweaking. We're a resilient species. We adjust accordingly and move on, while understanding that the adversaries are not going to hold static during this process. They're evolving and ever-changing. Fortunately, the good news is that with the aid of AI and machine learning, we are finally reaching the point where we can stay ahead of the evolving threat. A recent SE Labs study substantiated that the predictive advantage of an artificial intelligence, machine-learning-based solution extended out as far as two years and, in some cases, almost three. That means the solution could predict, two years in advance, how such recent challenges as WannaCry would present themselves. It could have stopped these attacks two years before they appeared. This power is going to give Law Enforcement, the Intelligence Community, our teams striving to protect our interests – and even us, as consumers – an advantage we haven't had heretofore.

The focus and importance placed on "human involvement" – the role of the human – is going to have to be rethought. The challenge we face is that threats are now morphing or changing daily, at a speed humans can't keep up with. This is where a partnership with AI's machine learning will allow us to extract what I call the "carbon-based units" – the humans – from the choke points where they simply don't have the speed or cognitive capability to keep up.

Stephen Hawking, Elon Musk, Bill Gates, and other very bright individuals have recently issued clarion calls of caution and concern, even ominous warnings. Like any new technology, AI can be a two-edged sword, and thus careful, considered reflection on its deployment – informed by lessons from the past – is, as it has always been, the order of the day.
