Cyber-attacks are becoming a major global concern, not just for nation-states but also for a myriad of critical infrastructure services, including healthcare, which is firmly in the cross-hairs of perpetrators. Healthcare presents an easy and lucrative target for cyber-attackers, both for the value of its PII, PHI, and IP and, increasingly, for the extortion value of holding sick patients or their medical data to ransom.
It’s no longer just a case of opportunistic criminals and organized crime hiding in remote parts of the world that lack effective local law enforcement – perpetrators safe in the knowledge that paid-off officials and a lack of international extradition treaties mean they can continue their pursuits at will. It’s now a case of nation-state cyber-military units attacking other countries for political and economic advantage, pushing at the boundaries of cyber war and carefully calculating that their actions will not provoke a kinetic or major economic response from those attacked, or from those shocked and appalled by their actions.
But cyber-attacks are also increasingly being automated, using AI to get past cyber defenses by removing the human constraint – the factor that causes an attacker to pause for consideration and prevents an attack from going too far. ‘Offensive AI’ mutates as it learns about its environment, stealthily mimicking humans to avoid detection, which is why it is increasingly being used in CEO fraud. It is the new cyber offensive weapon of choice and will automate responses to defensive measures, rather like playing chess with a computer – it learns as it goes!
We are all used to critically evaluating an image for the telltale signs of photoshopping or other manipulation before believing what we see. The same is true for audio recordings – was that really the President speaking, or was it an impersonator? What we are not used to is video manipulation – this is new territory for our brains to critically process and evaluate for truth and accuracy. AI is increasingly being used to create sophisticated ‘deepfakes’, in which a face is superimposed on someone else’s body or the entire video is computer-generated altogether.
But offensive AI is not used just to steal information; it can also change information in ways that make integrity-checking difficult, if not impossible. Did a physician really update a patient’s medical record, or did ‘offensive AI’ do it? Can a doctor or nurse really trust the validity of the electronic health information presented to them? Ransom of patient lives may not be far away – especially at times of heightened global tensions.
But AI is already being used very effectively for cyber defense across healthcare and other industries – for example, advanced malware protection that inoculates the LAN and responds in nanoseconds to anomalous behavior patterns, and biomedical security tools that use AI to continuously manage and secure the growing number of healthcare IoT devices as they connect to and disconnect from hospital networks. AI-powered attacks will outpace human response teams and outwit legacy defenses. ‘Defensive AI’ is not merely a technological advantage in fighting cyber-attacks, but a vital ally on this new battlefield – and the only way to protect patients from the cyber criminals of the future.
Cylera's MedCommand™ technology makes extensive use of machine learning – one form of artificial intelligence – to identify connected HIoT assets, perform a full risk analysis of each device, profile the legitimate traffic patterns of each device type for zero-trust security controls, alert on any traffic detected outside those legitimate patterns, and even automatically remediate discovered risks with compensating security controls via a hospital’s existing network access control and/or firewall technology.
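To illustrate the general idea behind profiling legitimate traffic and alerting on deviations, here is a minimal sketch of statistical anomaly detection. This is an illustrative assumption, not Cylera's actual implementation: the device traffic figures, function names, and the 3-sigma threshold are all hypothetical, and production systems model far richer features than byte counts.

```python
import statistics

def build_profile(samples):
    """Learn a baseline (mean, stdev) for a device type from
    historical per-interval byte counts."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(observed, profile, k=3.0):
    """Flag traffic that deviates more than k standard deviations
    from the learned baseline."""
    mean, stdev = profile
    return abs(observed - mean) > k * stdev

# Hypothetical infusion-pump traffic (bytes per minute) seen during
# normal operation on a hospital network.
baseline = [1200, 1150, 1230, 1180, 1210, 1190, 1220, 1170]
profile = build_profile(baseline)

print(is_anomalous(1205, profile))  # traffic within the learned profile
print(is_anomalous(9800, profile))  # a sudden burst, e.g. bulk exfiltration
```

In practice a deployment would learn a profile per device type and per traffic feature (destinations, ports, protocols, timing), and route alerts into existing network access control or firewall tooling as the paragraph above describes.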
To learn more, please contact us for a no-obligation solution walk-through and demonstration.