An Avatar for Security


An Avatar in security: the “intelligent” kiosks at Miami Airport. We revisit a still-relevant article as an introduction to new security technologies.

By Gary Peters

Robotic detection: a new approach for airport security?
A San Diego State University professor has developed a robotic kiosk that could help detect travellers with hostile intentions. Could this be the next wave of airport security?
The only thing that’s long and cumbersome about the AVATAR system is its name: Automated Virtual Agent for Truth Assessments in Real-Time. In physical form, it resembles an airport check-in or shop self-checkout kiosk.
The difference comes in the technology. AVATAR is all about understanding behaviour and, crucially, detecting potential security risks. It is currently being tested by the Canadian Border Services Agency (CBSA).

“What are the strategies people use when trying to manage the impression that other people have of them?” asks San Diego State University management information systems professor, Aaron Elkins, part of the team behind the invention. “It [AVATAR] is based on our understanding of the subtleties of behaviour.”
Passengers are required to stand in front of the kiosk – which has a ‘face’ on the screen – to be asked questions, such as: ‘are you engaged in smuggling?’
An array of sensors evaluates each reply and responds accordingly. If the system believes something warrants closer investigation, the individual is flagged for further questioning by a human border official or security personnel.
Here, Elkins delves deeper into the development of the technology, his hopes for its future use in airports, and why AVATAR should be seen as evolutionary, not revolutionary.

Gary Peters: When did the idea for this security kiosk first emerge?

Aaron Elkins: It started by looking at deception detection. So, how people interact and the strategies they use when trying to manage the impression that other people have of them. It is based on our understanding of the subtleties of behaviour.
What happens if you add a bunch of new sensors, such as a thermal camera, and record the audio? Could you improve the ability to detect deception?
We started to think about finding a way to standardise it and focus on the behaviours of deception. That’s where automation came in, so a computer version, which we called AVATAR.
[That led] to work with Frontex [the European Union (EU) border agency]. We also spent time in Romania, at Bucharest Henri Coanda International Airport, where we set up the system at international arrivals, seeing how people interacted with it.
The CBSA is interested in the science and how it will help the screening processes. We’re working with them to analyse the results and see what we’ll do next.

GP: What technology does AVATAR use?

AE: It has different tools and sensors: iris scanners, fingerprint readers, facial recognition. It has video cameras and microphones, and eye trackers to measure pupil size and how that might change during the course of an interview.
We also have floor sensors to judge how people are standing. It really is a myriad of sensors.
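
To make the multi-sensor idea concrete, here is a minimal sketch of how readings from such an array might be bundled into one per-question record for later analysis. The field names and the SensorFrame structure are illustrative assumptions, not details of the actual AVATAR implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SensorFrame:
    """Hypothetical bundle of multimodal readings captured while a
    passenger answers one interview question (names are illustrative)."""
    question_id: str
    audio_pitch_hz: Optional[float] = None      # from the microphone
    pupil_diameter_mm: Optional[float] = None   # from the eye tracker
    face_temp_c: Optional[float] = None         # from the thermal camera
    posture_shift: Optional[float] = None       # from the floor sensors
    extras: dict = field(default_factory=dict)  # room for more channels

def collect_frame(question_id: str, sensors: dict) -> SensorFrame:
    """Merge whichever channels reported a value into one record."""
    return SensorFrame(question_id=question_id, **sensors)

frame = collect_frame("q_smuggling", {
    "audio_pitch_hz": 182.4,
    "pupil_diameter_mm": 4.1,
    "posture_shift": 0.07,
})
```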

GP: Can you explain how it works?

AE: We hope the passenger experience will be convenient and quick. Rather than having to wait in large queues to get to border security officers, we envisage there will be a bank of AVATARS that can be used to self-screen by scanning your passport. It pulls up your information and the system knows what questions are pertinent based on your status, visa, country of origin of travel, and so on.
It will be similar to how you check in with an airline for your flight. You can get through the process faster, as the majority of the questions usually asked by a border official have been asked by the kiosk. The human component is filling in the blanks, or what the AVATAR suggests needs following up.
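
A minimal sketch of the self-screening flow Elkins describes: scan a passport, pull up the traveller’s record, and pick the pertinent questions. All rules, field names, and helpers below are illustrative guesses, not the system’s real logic.

```python
BASE_QUESTIONS = ["Are you engaged in smuggling?",
                  "Did you pay for your own visa?"]

def select_questions(profile: dict) -> list[str]:
    """Choose a question set based on status, visa, and so on."""
    questions = list(BASE_QUESTIONS)
    if profile.get("visa_type") == "work":
        questions.append("Who is your employer at your destination?")
    if profile.get("trusted_traveller"):
        # A known low-risk traveller might get a shorter screening.
        questions = questions[:1]
    return questions

def screen(passport_scan: dict, lookup) -> list[str]:
    """Passport scan -> traveller record -> tailored question set."""
    profile = lookup(passport_scan["document_number"])
    return select_questions(profile)

# Demo with an in-memory record store standing in for the real lookup:
demo_records = {"JX123456": {"visa_type": "work", "trusted_traveller": False}}
print(screen({"document_number": "JX123456"}, demo_records.get))
```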

GP: Are the questions personalised to each passenger?

AE: There is a standard set [of legally bound questions]. But, there will also be those that are relevant to a particular passenger. Based on responses, the question set will evolve to investigate and build on any issues that have come up.
Generally there are some baseline questions. And we can determine when answers become different emotionally. We could keep track of that, and ultimately say ‘OK, thank you, there’s an officer waiting here for you.’ Or, ‘we asked you if you paid for your own visa and you thought a lot about it. Can you tell me more?’
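
The baseline idea lends itself to a simple illustration: compare the behavioural signal on each answer with the same passenger’s own baseline from the neutral questions, and raise a follow-up when it drifts. The z-score rule, threshold, and choice of pupil diameter as the signal are assumptions made for the sketch, not the system’s actual method.

```python
import statistics

def follow_up_needed(baseline: list[float], reading: float,
                     threshold: float = 2.0) -> bool:
    """Flag an answer whose signal deviates from this passenger's own
    baseline by more than `threshold` standard deviations."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return False
    return abs(reading - mean) / stdev > threshold

# Baseline from neutral questions, then a probing question's reading:
baseline = [3.9, 4.0, 4.1, 4.0]          # pupil diameter, mm
print(follow_up_needed(baseline, 5.2))   # True -> hand off to an officer
```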

GP: How can you ensure that it will provide the correct answer? For example, how will it differentiate between someone who is lying and those who are just nervous flyers?

AE: It is important to note that the system doesn’t make an assessment of lie or truth; it is not a binary system that makes that kind of call.
What it actually does is dig deep into the behavioural reactions of the passenger. So, when asked about drugs, for example, it might notice a pattern that someone is answering with more anger or uncertainty. It is essentially looking at the behaviour that might imply there should be further investigation.
It is giving us emotional information. Someone could be bothered or agitated, but that doesn’t mean they are lying. So let’s look deeper. Just because someone is upset…they might be a nervous flyer.
AVATAR can look at behaviour on such a small scale.
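
One way to picture the “no lie/truth verdict” point: the output handed to the officer could be a per-question annotation of emotional reactions, never a boolean. The labels and structure below are invented for illustration; note the deliberate absence of any `is_lying` field.

```python
from dataclasses import dataclass

@dataclass
class QuestionReport:
    """Per-question behavioural annotations for a human officer.
    There is no lie/truth verdict: the system reports reactions only."""
    question: str
    dominant_emotion: str           # e.g. "neutral", "anger", "uncertainty"
    deviation_from_baseline: float  # in standard deviations

def needs_review(reports: list[QuestionReport]) -> list[str]:
    """List the questions a human officer may want to revisit."""
    return [r.question for r in reports
            if r.dominant_emotion != "neutral"
            or r.deviation_from_baseline > 2.0]

reports = [
    QuestionReport("Are you engaged in smuggling?", "uncertainty", 2.7),
    QuestionReport("Did you pay for your own visa?", "neutral", 0.4),
]
print(needs_review(reports))  # -> questions for officer follow-up
```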

GP: So your system detects changes in behaviour, and then passes the final decision on to the human border official?

AE: AVATAR is an interviewer, actively listening. A lie detector evokes images of a polygraph. We’re almost the opposite of that.
With AVATAR, nothing touches the passenger, unlike a polygraph. Also, a polygraph tends to be used to investigate something that has happened, not things in progress. They want to elicit a confession. Whereas, AVATAR is meant to be 30-40 seconds of screening, incorporating identification and authentication.
It’s more than just an emotional behaviour detector; it’s meant to be a screener, something that checks identity, greets the traveller, does customer service; almost a human participant in the whole process.
Security tends to be focused on the story and the language, at the expense of every other behavioural cue. It’s about teamwork. AVATAR is meant to gather more information for the decision makers.

GP: Could you see a time when technology such as AVATAR replaces human security officials?

AE: I don’t like to predict, because I’ll probably underestimate the technology, but I’d say no.
I still believe there needs to be that final say, with someone re-interpreting the data.
There could be some scenarios where the system could make low-risk judgements for trusted travellers, for example. But for really high-stakes decisions, I wouldn’t see the current setup changing.

GP: Would you look to place these kiosks near arrivals and departures?

AE: That’s what we imagine.
You have a group of kiosks, get the screening done, and move on. The thing is, no-one will want to use this if it doesn’t make their journey faster.
This is one of the reasons why it’s not the fastest transition to get a technology like this into an airport, because you couldn’t just drop it in. You would have to re-imagine some of the screening processes, the queues, and so on. It would need to be designed in tandem.

GP: As of December, the system was being tested with the CBSA – how is that progressing?

AE: The science continues. We’ve been talking with other governments and private companies, too.
One of the challenges with the tech is that while we can demonstrate it works great in one scenario – such as in Bucharest – there are different questions, cultures and languages [to think about when using it]. So, it ends up becoming a brand new system that has to be adapted to different needs and the culture.
Now we’re investigating new forms, whether that is fraud detection or counter-terrorism.

GP: What reaction have you had from industry?

AE: The majority of the people we work with like the technology and find it fascinating.
Sometimes the initial reaction is scepticism, but that tends to go once they understand how it could support their work. Generally it’s been pretty positive.
That’s what we did with Frontex. We had representatives from most EU countries come, use the system, talk to us, and give us feedback. A lot of the development has come from that; the direct feedback from border and security officials.

GP: Are you aware of any other similar systems to AVATAR?

AE: I’m not aware of anything that works in the same way as this. There are systems that study your voice or your eyes, but generally they are not focused on rapid screening. They tend to be polygraph extensions.
The thing that makes our system work and gives it the ability to detect is that it is anthropomorphised. There’s a face to it, an intelligence and personality. That personality can be changed to be more aggressive or more facilitating.
The individual components do exist elsewhere, but I haven’t seen it all rolled into one, as with AVATAR.