Monday, September 1, 2014

I Sense, Therefore I Feel Moody (Behavior Management in Robots)


(repub - original site went under.)


I Sense, Therefore I Feel Moody
June 30, 1999
Alan McDonley



In "A Model for Mood-Based Robot Behaviors" ( http://home.earthlink.net/~johncutter/rep_mood.htm - link no longer exists), John Cutter describes a nine-state model for behavior selection based on two variables called mood and comfort. That model served as a starting point for additional thoughts on robot behavior selection.

This paper presents concepts linking senses to feelings by preferences, linking feelings to behaviors by mood, linking behaviors to goals and linking goals to preference selection.

I Sense
An interacting robot will consist of a suite of sensory inputs and output modalities. Sensory inputs generally map measurements of environmental conditions that are orthogonal (strongly independent): light intensity, sound pressure, motion detection, nudge detection, linguistic information, time, day, and temperature. Light, sound, time, and temperature are continuous. Motion and nudge are discrete. Day is discrete and contiguous.

Linguistic input can be discrete or contiguous and can be a source of multiplexed input. Recognized words (whether spoken, tones assigned to words, or Morse code) can be chosen to convey discrete commands (dance|sing|silence) but can also convey contiguous values such as speed (slow-normal-fast, or silent-soft-normal-loud). Combinations of discrete and contiguous values are possible with linguistic input sensors (dance slow | sing loud). Input of named combinations of values, such as "homebase" for an X-Y pair, or "quietly" for soft singing, slow motors, and no alarms, becomes possible with a linguistic input sensor.
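As a sketch of how such a linguistic sensor might multiplex discrete commands, graded modifiers, and named combinations, consider the following Python fragment. The vocabulary and numeric levels are illustrative assumptions drawn from the examples above, not a prescribed design:

```python
# Discrete command words, graded modifiers, and named macro combinations.
# All entries here are illustrative; levels are arbitrary 0.0-1.0 values.
VERBS = {"dance", "sing", "silence"}
MODIFIERS = {"slow": 0.3, "normal": 0.6, "fast": 1.0,
             "soft": 0.3, "loud": 1.0}
MACROS = {"quietly": {"volume": 0.2, "speed": 0.3, "alarms": False}}

def parse_command(words):
    """Parse a recognized-word command like 'dance slow' or 'quietly'."""
    cmd = {}
    for w in words.split():
        if w in VERBS:
            cmd["verb"] = w           # discrete command
        elif w in MODIFIERS:
            cmd["level"] = MODIFIERS[w]  # contiguous value
        elif w in MACROS:
            cmd.update(MACROS[w])     # named combination of values
    return cmd
```

A command such as "dance slow" yields both a discrete verb and a graded level, while "quietly" expands into several settings at once.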

I Feel
Reactions to sensory input can be called feelings. The robot must be programmed with preferences, either static or dynamic, to decide how it "feels" about sensory input values.
A preference allows the robot to convert a continuous sensory input into good, neither-good-nor-bad, or bad "feelings," and a discrete sensory input into good or bad "feelings."
Preferences can be natural or imagined. Natural preferences come from base desires such as self protection or knowledge preservation. Imagined preferences come from artificial decisions induced for the purpose of creating interest such as a preference for specific sound levels that might indicate the presence of humans to interact with.

A robot should have a natural preference for cooler temperatures because they lengthen the life of its circuitry, but cold temperatures might limit its off-base excursion time. Thus cold is bad, hot is bad, but some intermediate range is good, and some wider band is neither-good-nor-bad. Note that the sensor range is continuous and linear, while the feeling range is contiguous and parabolic (bad-ok-good-ok-bad).
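A minimal sketch of such a parabolic preference, assuming illustrative temperature bands in degrees Celsius (the band boundaries are arbitrary choices, not values from the text):

```python
def temperature_feeling(temp_c, good=(15, 25), ok=(5, 35)):
    """Map a continuous temperature onto the bad-ok-good-ok-bad feeling scale.

    good: assumed preferred band; ok: wider neither-good-nor-bad band.
    """
    if good[0] <= temp_c <= good[1]:
        return "good"
    if ok[0] <= temp_c <= ok[1]:
        return "ok"
    return "bad"   # too cold or too hot
```

The continuous, linear sensor value comes in; a contiguous, parabolic feeling comes out.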

Light intensity and light transitions can be used to detect preferable conditions. In general, room lights being on means people may be nearby, ok to make noise, burglars are unlikely. Thus the robot might be programmed with an imagined preference to the room lights on condition. Of course, if the robot has a "loner" personality, then perhaps it would have an imagined preference to avoid rooms with the lights on. If the owner has expressed anger toward the robot, the robot may change its imagined preference for human contact temporarily by establishing a preference for dark rooms. A mapping of light intensity to feeling is, thus, dependent on a dynamic preference. In general the more light the better, but the absence of light is not necessarily bad.

Light level transitions can also be analyzed by preferences. The transition from room lights off to room lights on could be preferable, while the transition from on to off might be considered less preferable. Transition from dark to some light but not room light level might indicate dawn, and nearing time to awake the owner. Being needed is a preferred condition, thus a preference for light above dark. Likewise, rapid transition to less than room light level, followed by rapid transition to dark might indicate a burglar with a flashlight – a less than preferable condition.
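One hedged sketch of mapping light transitions to these conditions follows; the lux thresholds are illustrative guesses, and a real burglar-flashlight detector would need a longer event history than a single transition:

```python
def classify_light_event(prev_lux, curr_lux, room_lux=300, dark_lux=10):
    """Interpret a single light-level transition (thresholds are assumptions)."""
    if prev_lux < dark_lux and curr_lux >= room_lux:
        return "lights_on"     # people nearby: preferable
    if prev_lux >= room_lux and curr_lux < dark_lux:
        return "lights_off"    # less preferable
    if prev_lux < dark_lux and dark_lux <= curr_lux < room_lux:
        return "dawn"          # nearing time to wake the owner: preferable
    return "no_event"
```

A dynamic preference layer could then rank these events differently for a "loner" personality or after expressed anger from the owner.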

Sound pressure and pressure duration can be used to detect danger, angered humans, unusual circumstances, music, speech, and other aural conditions. The robot can have an imagined preference for music if it knows how to dance (detects a beat and moves in time to it). Of course, a jackhammer pounding outside the house might cause the robot to dance in an empty room, but the robot will be "happier".

Motion detection can feed an imagined preference for people, but if the robot knows that the house should be empty, a preference to be alone would be natural. Being a discrete sensor, motion detection can only be evaluated as good or bad in relation to the preference.

Nudge detection can be discrete, or contiguous by counting nudges within a time period. When moving, the robot might have a natural preference not to experience nudges from running into walls. When standing still, nudges probably mean a human is trying to communicate – an imagined preferable condition.
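A possible sketch of a nudge sensor that is discrete per event but contiguous over a time window; the window length is an arbitrary assumption:

```python
from collections import deque

class NudgeSense:
    """Count nudges in a sliding time window (window_s is an assumed value)."""

    def __init__(self, window_s=10.0):
        self.window_s = window_s
        self.events = deque()   # timestamps of nudges, oldest first

    def nudge(self, now):
        self.events.append(now)

    def count(self, now):
        # Drop events that have aged out of the window.
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events)

    def feeling(self, moving, now):
        # Moving: nudges mean wall collisions (bad).
        # Standing still: nudges likely mean a human communicating (good).
        if self.count(now) == 0:
            return "neutral"
        return "bad" if moving else "good"
```

In practice the timestamps would come from a monotonic clock; explicit times are used here so the behavior is easy to follow.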

Linguistic information would usually be good, but certain words or information could be indications of anger, so these words would not be on the prefer list. If the robot decided to dance and was told to stop dancing, those words might be lower on the prefer list. The measurement of linguistic information with respect to natural and imagined preferences is more complex than simple mappings, but great richness of feelings is possible from linguistic information.

Having a time and day-of-the-week clock is very useful to a robot. Knowing that scheduled interactions are near allows the robot to be prepared: fully charged and certain of its position. Times close to scheduled interactions could be more preferable than times far from being needed or far from human contact. Daytime might be imagined preferable to nighttime. If the robot keeps a memory of the times and days of human contact, with an imagined preference for the days of highest contact frequency, the robot could express a preference for the weekend, or for a family member being off from work on the same day every week.

Feelings can be "caused" or influenced by a single sensor, one of multiple sensors, or only by a collection of sensors. Therefore feelings can be orthogonal if dependent on distinct collections of orthogonal sensors. The feeling of security (driven by light, sound, and motion) is not totally orthogonal to the feeling of companionship (driven by light, sound, motion, linguistic information, or nudges), but companionship and feeling hot are orthogonal.
 

I Have Moods
Moods are less specific terms used to characterize exhibited behaviors. Behaviors are not chosen because of mood, but mood can describe the interaction of unrelated feelings in behavior selection.

When all the robot’s feelings are good, the robot would be expected to be in a very good mood. Thus the robot’s behavior in the presence of a less than preferable sensory condition might be softened by other unrelated good feelings.

When all the robot feelings are bad, we would understand the robot being in a bad mood. Thus the robot’s behavior in the presence of a less than preferable sensory condition might be exaggerated by other unrelated bad feelings.
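One way these two paragraphs could be sketched is an aggregate mood weight that softens reactions when unrelated feelings are mostly good and exaggerates them when mostly bad. The scale factors here are arbitrary assumptions:

```python
def mood_weight(feelings):
    """Aggregate unrelated feelings into a reaction-scaling mood factor.

    Returns a value below 1.0 (soften reactions) when most feelings are
    good, above 1.0 (exaggerate reactions) when most are bad, and 1.0
    when they balance. The 0.5 scale is an arbitrary design choice.
    """
    score = sum(+1 if f == "good" else -1 if f == "bad" else 0
                for f in feelings.values())
    frac = score / max(len(feelings), 1)   # ranges from -1.0 to +1.0
    return 1.0 - 0.5 * frac                # all good -> 0.5, all bad -> 1.5
```

A behavior's intensity could then be multiplied by this weight, so a less-than-preferable sensory condition provokes a muted reaction in a good mood and an exaggerated one in a bad mood.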

When a majority of feelings are good, can the robot still be considered to be in a good mood? Do behaviors need to be chosen with different weighting for related feelings from unrelated feelings?

Happy is the most desirable mood. When the majority of feelings are good, we would like to think that we are happy. For a robot, this could be demonstrated by tolerance for non-preferable inputs or by more demonstrable communications of a happy nature.

Unhappy is a mood where a majority of feelings are less than preferable. This mood would be characterized by selection of behaviors intended to directly correct the most important bad feeling or to communicate the bad feeling in hopes that a human will correct the situation.

Sad is sometimes associated with sensing a non-preferable condition that we are unable to influence for the positive. Unrelated behaviors might be exaggerated to communicate that the robot feels unable to fix the problems.

Depression is, clinically, an unwarranted sense of doom. Since it is highly contagious, it is wise to avoid depressed robots.

Playful is a mood where happiness is demonstrated and a reaction to the robot’s happiness is desired. Playful could be characterized by selecting and possibly exaggerating behaviors related to humans.

Cutter includes normal as a mood in his mood matrix. Robots might be said to have normal or abnormal reactions, but what defines a normal mood for a robot? This would seem to be a mood where random behavior selection would be most expected.

Other moods can be used to describe the specific influences of unrelated feelings in the behavior selection process.

I Have Behaviors
Behaviors in a robot serve many purposes. Behaviors can be used to directly meet goals and also to communicate with humans. Direct behaviors attempt to influence a specific condition, while indirect behaviors communicate feelings or mood to others.

I Have Goals
Goals can be static or dynamic and can be prioritized. Goals influence preferences. Preferences influence feelings. Feelings influence behaviors. Moods describe the manner of behavior selection. Behaviors influence goal achievement.
Should a robot have a goal to maximize happiness? What benefit to the owner is a maximally happy robot?
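The influence chain above might be sketched as a single selection function. Every goal name, sensor name, and threshold in this fragment is an illustrative assumption, not a prescribed design:

```python
def select_behavior(goals, sensors):
    """Goals -> preferences -> feelings -> behavior (illustrative sketch).

    goals: list of active goal names; sensors: dict of raw sensor readings.
    """
    # Goals influence preferences: each goal contributes a sensor mapping.
    preferences = {
        "stay_cool":   ("temp_c", lambda v: "good" if v < 25 else "bad"),
        "seek_humans": ("lux",    lambda v: "good" if v > 300 else "bad"),
    }
    # Preferences convert sensor values into feelings.
    feelings = {goal: pref(sensors[sensor])
                for goal, (sensor, pref) in preferences.items()
                if goal in goals}
    # Feelings influence behaviors: directly correct the first bad feeling,
    # or choose a happy (playful) behavior when nothing is wrong.
    bad = [g for g, f in feelings.items() if f == "bad"]
    if not bad:
        return "play"
    return "correct:" + bad[0]
```

A fuller implementation would weight the choice by mood and feed behavior outcomes back into goal achievement, closing the loop the paragraph describes.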

Summary
This discussion has shown a method for giving meaning and purpose to robot sensory inputs. Much investigation in the area of robot feelings and moods is needed to prepare for the generation of multi-dimensional sensing and expressing robots that appear to be near at hand.
© 1999,2009 Alan McDonley. All rights reserved.