Digital Pathways’ Colin Tankard looks at how we reap the rewards of AI while avoiding the risks.
Artificial intelligence (AI) and machine learning (ML) are two very hot buzzwords right now and often seem to be used interchangeably. They are not quite the same thing, but the perception that they are can sometimes lead to confusion.
Machine learning is a type of artificial intelligence that allows software applications to become more accurate at predicting outcomes without being explicitly programmed.
AI is the process of simulating human intelligence, using machines, especially computer systems. The process includes learning (the acquisition of information and rules for using the information), reasoning (using the rules to reach approximate or definite conclusions) and self-correction.
In smart buildings, AI is already being used to control the environmental needs of the people working within the building. For example, monitoring the number of people in any area and using this intelligence to decide whether the air conditioning should be switched on, or whether lowering shades or opening windows will suffice.
Another example is controlling the smart building environment out of hours, by counting the number of people in the building, noting when unusual events happen, and acting accordingly.
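At its simplest, a decision of this kind is a set of threshold rules over sensor readings. The sketch below is purely illustrative: the occupancy figures, temperatures and actions are assumptions, not taken from any real building-management system.

```python
# Illustrative occupancy-driven climate decision for one zone.
# All sensor names and threshold values are hypothetical.

def climate_action(occupancy: int, temp_c: float) -> str:
    """Decide how to cool a zone from its people count and temperature."""
    if occupancy == 0:
        return "no action"  # empty zone: save energy
    if temp_c >= 26.0 and occupancy > 20:
        # Hot and busy: passive cooling will not suffice.
        return "switch on air-con"
    if temp_c >= 23.0:
        # Mild heat: try passive measures first.
        return "lower shades / open windows"
    return "no action"

print(climate_action(30, 27.5))  # switch on air-con
print(climate_action(5, 24.0))   # lower shades / open windows
```

A real system would of course weigh many more inputs (CO2 levels, forecast, energy tariffs), but the shape of the logic is the same: sensed state in, environmental action out.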
All of this, and more, is with us today and will continue to expand into our daily business and personal lives.
Although the benefits look good, there is a fear that such AI programs could ‘go rogue’ and turn on us, or be hacked by other AI programs. Hackers love artificial intelligence as much as everyone else in the technology space and are increasingly using AI to improve their phishing attacks. The need for innovative and robust data security therefore becomes even more important to the management of the smart building than it is at present.
Imagine a hacker taking over a building’s security system by accessing the system’s intelligence and herding all key personnel into one room under the pretext of a ‘gunman threat’. Once the key people are inside, confirmed by the AI’s facial recognition capability, the system locks the room and sends ransom threats to every computer screen in the building, using ransomware tactics such as a ticking countdown clock to make people react quickly.
Although AI looks good, our current buildings are not so ‘smart’ and the systems installed use old technology. Simply bolting on AI will not deliver the perceived benefits, as it will be held back by the lack of integration.
Given the high cost of replacing systems such as HVAC (heating, ventilation and air conditioning), it will be some time before the platforms are available to exploit the benefits of AI. It is clear there will be a marked step-change between new buildings and those that are even only a decade old.
GDPR poses another question. Will it be permissible for a user to grant an application permission to make automated decisions on their behalf, as with recommendation systems?
These were first implemented on music content sites but now extend to many different industries. For example, the AI system may learn a user’s content preferences and push content that fits those criteria. This can help companies reduce bounce rate by keeping the user interested.
Likewise, you can use the information learned by your AI to craft better-targeted content for users with similar interests. However, GDPR will see the AI application as holding personally identifiable information (PII), which might include age, gender and location, because it presents what it has learnt from one user to others with similar profiles.
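A minimal sketch of such a recommendation system might look like the following. The items, tags and preference logic are hypothetical stand-ins for whatever the real AI has learnt; the point is that the user's viewing history becomes a profile that drives what is pushed next.

```python
# Minimal content-based recommendation sketch (hypothetical catalogue and
# tags, not a production recommender).
from collections import Counter

def top_tags(history, tag_of, n=2):
    """Learn a user's preferred tags from the items they have viewed."""
    counts = Counter(tag for item in history for tag in tag_of[item])
    return [tag for tag, _ in counts.most_common(n)]

def recommend(history, catalogue, tag_of):
    """Push unseen items that share the user's top tags."""
    liked = set(top_tags(history, tag_of))
    return [item for item in catalogue
            if item not in history and liked & set(tag_of[item])]

# Toy catalogue: track IDs mapped to genre tags.
tag_of = {"a": ["jazz"], "b": ["jazz", "live"], "c": ["rock"], "d": ["jazz"]}
print(recommend(["a", "b"], ["a", "b", "c", "d"], tag_of))  # ['d']
```

Even in this toy form, the GDPR tension is visible: the learned profile (`liked`) is derived personal data, and applying it to other users with similar histories is exactly the cross-profile processing the article describes.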
GDPR requires that the data be secure and used appropriately. But, with the AI program constantly learning and sampling data, this becomes a problem.
And, if a user does give permission for their data to be modelled, will it be accompanied by a comprehensible explanation of how the AI makes decisions and how these decisions may impact that user? This would be very difficult to achieve as GDPR calls for ‘clear language’ and AI code learning is far from easy to explain.
From a technical perspective, the level of granularity GDPR requires in explaining automated decisions is unclear. Until the picture is clarified, some innovators may choose to forge ahead with super algorithms. Others, worryingly, may ban European citizens from using some highly valuable functionality.
When thinking about automating important decisions and giving high-stakes autonomy to AI machines, particular attention should be given to constraining their behaviour by defining what is desired, what is acceptable and what is not. This is the essence of science-fiction writer Isaac Asimov’s Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
AI power will need to be controlled, and the Three Laws of Robotics need to be the mantra for AI programs. It should be mandated in all code that AI programs ask for human intervention when unusual situations are detected, or when the computed uncertainty in a prediction or decision exceeds a certain threshold.
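The human-intervention safeguard described above can be sketched as a simple gate on the AI's self-reported uncertainty. The threshold value and the decision format here are assumptions for illustration, not a prescribed standard.

```python
# Sketch of an "ask a human" safeguard: an AI decision is acted on
# automatically only when its self-reported uncertainty is below a
# policy threshold. The threshold of 0.2 is an illustrative assumption.

UNCERTAINTY_THRESHOLD = 0.2

def act_or_escalate(decision: str, uncertainty: float) -> str:
    """Execute confident decisions; escalate uncertain ones to a person."""
    if uncertainty > UNCERTAINTY_THRESHOLD:
        return (f"ESCALATE to human operator: {decision!r} "
                f"(uncertainty {uncertainty:.2f})")
    return f"EXECUTE: {decision!r}"

print(act_or_escalate("unlock door 3", 0.05))     # executed automatically
print(act_or_escalate("lock down floor 2", 0.45)) # referred to a human
```

The design choice is deliberate: the gate sits outside the AI model itself, so even a compromised or misbehaving model cannot bypass the escalation path.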
This may go against the vision of AI, but until we can have total trust in the underlying code being used to develop the AI, we must show caution. Remember, humans are still writing the code and can make mistakes or, more worryingly, add code that will allow for future control of the AI for malicious ends.
Trust in AI
It is almost impossible to say how an organisation can have trust in any AI unless they have access to the source code and the ability, or contacts, to read and debug it.
As AI is introduced into building systems, it will fall to facilities teams to question what level of code review has been undertaken within the AI module. This might be possible if the designer of the AI is a large vendor that can show in-depth test results and other customer implementations, but most of the AI vendors leading the technology revolution are small and do not have the client base, or the volume of test data.
At this point a difficult decision needs to be taken by management as to how far they ‘dip their toe’ into AI. It is a bit like autonomous cars: they do work, but governments remain wary of introducing the legislation that would allow the technology.
AI is with us and will increasingly be integrated into the smart environment. Whilst the potential benefits are far-reaching, making lives better, the environment cleaner and our personal and business lives more efficient, we must be aware of the possible threats it can create and take the appropriate action from the very beginning.