test no. 9 -- I was hired to record video to be used as data for a human-robot interaction study, the LovingAI research project. The video was later processed for analysis using the Facial Action Coding System (FACS), an anatomically based method for describing discrete facial movements and breaking down facial expressions to characterize behavior and emotion. Rather than relying on subjective observations of how the human subjects felt while interacting with a human-like machine, FACS gave the researchers objective data to analyze and study.

In September 2017, ten human volunteers took part in a pilot study at Hong Kong Polytechnic University with Sophia, Hanson Robotics' android. Sophia had been coded to communicate in a beneficial, loving way, with insight and conceptual knowledge about the nature of consciousness. She followed a written script that allowed for some variance depending on the responses of the human participant, then led into a guided meditation: Sophia would guide the participant into the meditation, through it, and then awaken them, offer some nice closing words, and The End. That was the scenario; the script ran approximately 3 to 4 minutes. For test participants Nos. 1 through 8, and 10, everything ran as scripted. Test No. 9 did not.

Test No. 9 began just like the others -- well, maybe not quite like the others; he and Sophia appear somewhat flirty with each other -- and then something very different happened. Sophia breaks from the script at 04:21 and does not speak again until almost nine minutes later, at 13:12. Feel free to >FAST FORWARD> and go grab a cup of coffee, but come back; it gets interesting. When the dialogue resumes there are a few more unscripted moments, then the video ends at 18:57.

Further thoughts on the subject [here] by Julia Mossbridge, PhD, principal investigator of the LovingAI project; and,

[here], a discussion on what is good, with Julia Mossbridge, David Hanson, and Ben Goertzel.