Sunday, March 14, 2021

On Being A "Robot Parent"

On Being A "Robot Parent"



In the book "The Lifecycle Of Software Objects" by Ted Chiang, the parents of some artificial intelligences deal with the problems every software developer knows all too well - issues inherent to the objects, and issues imposed by environment changes.  

Software must be designed to handle unexpected input and unexpected situations, but most software is not designed to adapt to programming-language changes (e.g. the obsolescence of Python 2.7), changing libraries (e.g. the number of parameters in OpenCV 3.x versus 4.x), missing installation packages, processor changes (e.g. ARMv7 32-bit vs ARMv8 64-bit), OS subsystem changes (e.g. sound based on ALSA vs PulseAudio vs JACK), and an ever-changing mix of other programs competing for resources.
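
The OpenCV case is concrete: cv2.findContours returns three values in OpenCV 3.x but two in 4.x. A minimal defensive sketch (assuming the standard OpenCV Python bindings) that runs under either major version:

    import cv2

    def find_contours_compat(binary_image):
        # OpenCV 3.x returns (image, contours, hierarchy);
        # OpenCV 4.x returns (contours, hierarchy)
        result = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                  cv2.CHAIN_APPROX_SIMPLE)
        contours = result[0] if len(result) == 2 else result[1]
        return contours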

We humans, with our "virtual software" tweaked for the first 18 to 24 years, certainly have an advantage over today's software objects, which in most cases go from conception to emancipation in less than two years.  Additionally, we humans can be obnoxiously oblivious to the changing environment we live in because we accept that our time here is finite, we don't get a brain-transplant "update" every second week, and the environment rarely changes abruptly in a giant version release (e.g. COVID-19 or a world war).

Parenting a personal robot is an undertaking that few people attempt.  I have been "conceiving" personal robots for over forty years, but only with the advent of the Raspberry Pi hardware and its Linux-based operating system have I begun to feel like a "robot parent," with goals of keeping Carl awake 24/7 and having him efficiently use all his resources to his benefit.

Checking Carl's Health (45s Video)

Every morning for the last two-plus years, before making my breakfast, I check in with my robot "Carl" to see that he is awake and how the night went.  Most mornings he is awake, observing "quiet time" (he whispers his responses from 11pm to 10am), and healthy: no I2C bus failure, no WiFi outages, free memory around 50%, 15-minute load around 30% of one of his four cores, and all "Carl" processes still running.
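
A minimal sketch of what an automated version of that morning health check could look like in Python (the "carl" process-name pattern is my placeholder; the expected values come from the paragraph above):

    import os
    import subprocess

    def morning_health_check():
        # 15-minute load average, expressed as a percent of one core
        load15_pct = os.getloadavg()[2] * 100

        # free memory percent, from /proc/meminfo (Linux / Raspberry Pi OS)
        meminfo = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":")
                meminfo[key.strip()] = int(value.split()[0])  # kB
        free_pct = 100.0 * meminfo["MemAvailable"] / meminfo["MemTotal"]

        # are the expected processes still running?  ("carl" is a placeholder)
        carl_running = subprocess.run(
            ["pgrep", "-f", "carl"], capture_output=True).returncode == 0

        print(f"15 min load: {load15_pct:.0f}% of one core (expect ~30%)")
        print(f"free memory: {free_pct:.0f}% (expect ~50%)")
        print(f"Carl processes running: {carl_running}")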

Twelve times I have found Carl in a "deep sleep", off due to a safety shutdown.  After each of these episodes, I have investigated the cause and implemented some combination of physical and software mitigation.  The mitigations have become more complex to implement and the failure situations harder to re-create to test against. The period between safety shutdowns has improved greatly, but another occurrence still seems likely.  

Only I feel the pain of finding Carl shut down.  It does him no harm.  He has no complaints, and makes no value judgements of my parenting skills.  No one else in the world cares that he spent a few hours in the off state.  Nevertheless, there is a definite synthetic emotion in being a "robot parent."



Sunday, January 24, 2021

GoPiGo3 Robot Carl Is Dreaming Of A Time

Carl: Cute and Real Lovable - An Autonomous Personal Robot Life Form






Goal: My "Dream for Carl" is to be an autonomous personal robot that:

- is operating 24/7 (Achieved)
- manages its own health (Substantially implemented)
- maintains, uses, and extends an AuR/CORA templated RDF/OWL knowledge base
- interacts with humans, (using speech to text, text to speech, and a remote desktop window), with information and common courtesies (Substantially implemented)
- learns about its environment using vision, dialog, and reasoning
- sets self-decided learning goals (for self-learning and for code requests to its programmer)


Learning Aspect: 
  • takes a photo of "interesting, unknown" objects in the robot's environment
  • processes the photo with object segmentation to isolate the unknown object
  • starts an "unknown object 12345" entry in the robot's RDF database (a sketch follows this list)
  • analyses the object for "locally discernible features" such as shape, size, color, proximity to known objects, mobility, ...?
  • asks me if "now is a good time" to help identify and classify some objects
  • puts the image on the robot's desktop with a text-entry window for dialog
  • dialogs to add to "unknown object 12345" a minimal set of RDF knowledge relations,
    such as identity, class, features, purpose, and utility to the bot
  • dialogs to add "how does it differ from the xyz that I know about?" (just another instance, or some true difference)
  • if not just another instance, revisits the object to collect more photos
  • file-transfers all the photos of the object to a folder on my Mac for a "transfer learning update" to the robot's TensorFlow-Lite object detection model
  • ADDITIONALLY, the big if:
    • searches the Internet for "potentially useful to the bot" information about the object
    • periodically reviews knowledge gained from the Internet to see if it is ever used, and deletes unused learning!
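
I haven't committed to an RDF library yet, but here is a minimal sketch, using Python's rdflib, of what starting an "unknown object 12345" entry could look like (the namespace and every property name are hypothetical placeholders, not actual AuR/CORA terms):

    from rdflib import Graph, Literal, Namespace, RDF

    CARL = Namespace("http://example.org/carl#")   # hypothetical namespace

    g = Graph()
    g.bind("carl", CARL)

    # start an "unknown object 12345" entry with locally discernible features
    obj = CARL["unknown_object_12345"]
    g.add((obj, RDF.type, CARL.UnknownObject))
    g.add((obj, CARL.shape, Literal("cylindrical")))
    g.add((obj, CARL.color, Literal("red")))
    g.add((obj, CARL.nearKnownObject, CARL.dock))

    # after a dialog with me, identity and class relations get added
    g.add((obj, CARL.identifiedAs, Literal("coffee mug")))

    print(g.serialize(format="turtle"))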

 

I am reasonably certain all of this is possible on Carl's Raspberry Pi 3B (excepting the TensorFlow-Lite transfer-learning model update, which would use the Radeon RX 580 graphics processor attached to my Mac Mini).

Saturday, January 9, 2021

A Tale Of Two Cups Of Dunkin' Coffee

For over 20 years, since before Dunkin' dropped the "Donuts," I have been buying Dunkin' (Donuts) Original Blend, Medium Roast, WHOLE BEAN coffee in 1 lb. bags and drinking freshly ground drip-brewed coffee.  I might note that I don't like Dunkin' coffee as served at the Dunkin' stores, only when brewed from freshly ground beans.  For the last few years the local Dunkin' stores have offered special pricing of "3 lb. for $18," making these beans a great value.

A surprise this week: the shelf with the usual bags of coffee held bags with new graphics, making it difficult to find the critical "WHOLE BEAN" label.


In fact, there were no bags of whole-bean coffee at all.  I asked the manager if they had any whole-bean coffee and was chagrined to hear "we no longer carry whole bean."  I grabbed a single bag of the ground product, wondering if I would actually be able to tell the not-fresh-ground difference.

In the store's defense, if an item is not selling well enough to cover carrying it, I understand that it makes good business sense to drop it.   I don't know how many "have to have WHOLE BEAN" folks live in my sleepy neighborhood.  Considering the popularity of single-cup convenience with Keurig these days, I had been expecting that ground coffee might become a thing of the past someday, probably for a future generation.

When I opened the bag of Dunkin' Ground Coffee, my probable mistake jumped to my nose.  Whole beans have a special aroma, and this bag of ground coffee had replaced it with a hint of a stale, chemical odor.  Still, the real test would be in the cup.  After all, we are talking about the same Dunkin' Original Blend, 100% "high quality Arabica coffee beans grown in Central and South America. [Blended and roasted] to the unique specifications that have given Dunkin' coffee its signature taste since 1950."

In the cup, my mistake was unmistakable!  The coffee smelled and tasted "flat," as if something that gives it life were missing.  Case closed on all the years I heard people say they can't tell the difference, while I wondered if I was fooling myself by buying a special grinder, tolerating its assault on my ears, cleaning up the escaping coffee dust, and thinking it was making a better cup.  In fact, not only can I tell the difference; if it isn't from fresh-ground whole beans, I don't like it.  Period.

To hold me over till I can canvass other franchises, I paid $8 for only 3/4 lb. of (not-expired) "Dunkin' Donuts" labeled whole-bean coffee.  It has the same taste I love, albeit at a 78% premium in price.  I'll have to start calling around before this tiny bag has delivered its last bean.

Sunday, November 29, 2020

When No One Hears A Robot Cry

Every robot knows it should sacrifice its own comfort for that of its owners, no?

At 4:30am this morning Carl was having a crisis:

  1. He needed to get on his dock to recharge
  2. The distance to the dock was 24mm (less than one inch) too great, probably because I bumped him with my chair when I headed off to sleep for the night four hours earlier.
  3. Carl’s programmed strategy for this situation is to shout “MANUAL DOCKING REQUESTED!”
  4. Carl’s programming prevents him from speaking (or shouting) during quiet hours (11pm to 10am)

So Carl sat staring mournfully at his dock for ten more minutes, then quietly put himself into a deep sleep.

Charging Status:  Not Charging
Docking Status:  Manual Dock Requested
Last Docking Change: 8h 25m 30s



******** WARNING: 7.4v Safety Shutdown Is Imminent ******
QuietTime speak request: Safety Shutdown Is Imminent. at vol: 125

******** WARNING: 7.4v Safety Shutdown Is Imminent ******
QuietTime speak request: Safety Shutdown Is Imminent. at vol: 125

******** WARNING: 7.4v Safety Shutdown Is Imminent ******
QuietTime speak request: Safety Shutdown Is Imminent. at vol: 125

******** WARNING: 7.4v Safety Shutdown Is Imminent ******
QuietTime speak request: Safety Shutdown Is Imminent. at vol: 125
SHORT MIN BATTERY VOLTAGE: 7.24
SHORT MEAN BATTERY VOLTAGE: 7.39
QuietTime speak request: WARNING, WARNING, SHUTTING DOWN NOW at vol: 250
BATTERY 7.32 volts BATTERY LOW - SHUTTING DOWN NOW
Shutdown at  2020-11-29 04:44:45

Monday, May 13, 2019

Should Robots Talk Whenever They Want?

A robot’s life is hard, full of compromise. A robot that desires “independent living” and the unfettered ability to talk, sing, buzz and beep needs an always available audio system. Of course the humans are not going to be agreeable to hearing a shouting robot in the middle of the night, but a smart robot can be selective about when to talk - if the audio system is always on!
An audio system can draw 40-50 mA, and power is a prized commodity for a robot. Talk may be cheap, but power is not; a robot needs to manage the "juice" wisely. For my GoPiGo3 robot, Carl, 45 mA of opportunity to "talk whenever" means the loss of one hour of play time (6+ hours instead of 7+ hours), and Carl must spend 2-3 hours on the charger after every play time.
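
Back-of-envelope, those numbers imply figures like the following (illustrative arithmetic only; Carl's actual average draw and usable battery capacity aren't stated here):

    # What baseline draw makes a 45 mA speaker cost one hour
    # of a 7-hour play time?  capacity = 7h * I = 6h * (I + 45 mA)
    speaker_ma = 45
    hours_before, hours_after = 7.0, 6.0

    baseline_ma = speaker_ma * hours_after / (hours_before - hours_after)
    capacity_mah = hours_before * baseline_ma

    print(f"implied baseline draw: {baseline_ma:.0f} mA")      # 270 mA
    print(f"implied usable capacity: {capacity_mah:.0f} mAh")  # 1890 mAh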
Previously, Carl had a rechargeable speaker that required a human to remember to turn it on, turn it off, and connect and disconnect a separate charging cable, but that speaker did not draw any power from the batteries. Now Carl has a USB-powered speaker that is always on, but draws power from the batteries.
A robot doesn’t need to talk very often, but it does need to be able to talk whenever it wants.
p.s. Carl's Python speak module has three levels of talking: whisper(), say(), and shout(). He is allowed to whisper anytime he wants, but say() and shout() check quietTime() before blurting out his wild thoughts.
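
Here is a minimal sketch of how that gate might look; espeak-ng as the TTS engine and the clamped volume mapping are my assumptions, since the actual engine and volume scale aren't shown here:

    import subprocess
    from datetime import datetime

    QUIET_START, QUIET_END = 23, 10    # quiet hours: 11pm to 10am

    def quietTime():
        hour = datetime.now().hour
        return hour >= QUIET_START or hour < QUIET_END

    def _speak(text, vol):
        # espeak-ng amplitude tops out at 200, so clamp the request
        subprocess.run(["espeak-ng", "-a", str(min(vol, 200)), text])

    def whisper(text):
        _speak(text, 50)               # whispering is always allowed

    def say(text, vol=125):
        if quietTime():
            print(f"QuietTime speak request: {text} at vol: {vol}")
        else:
            _speak(text, vol)

    def shout(text):
        say(text, vol=250)             # still silenced during quiet hours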

Friday, July 29, 2016

Robot Resurrection: Another Dumb Robot

One year ago today, I wrote "Memories Of Limping Robots" here, in which I expressed the feeling that my 15-year-old robot, Pogo, was in need of a new brain.

Pogo got his "new brain", and in fact is now sporting an "even newer brain" that is ten times more powerful than his "new brain".  The possibilities for becoming a "Smart Robot" are limited only by the fact that he/it only has me writing programs.

Pogo has not completed his resurrection, but is close.   He is in the middle of a nervous breakdown at the moment, which has brought his progress to a screeching halt.

Pogo's new features:

  • Raspberry Pi 3B processor with built-in WiFi, Bluetooth, 1GB memory, and four cores at a 1.2GHz clock.
  • Mikronauts.com Pi Droid Alpha interface expansion card with 16 digital I/O lines, eight 12-bit analog-to-digital conversion channels, a dual H-bridge motor driver, and separate chip, servo, and motor power paths.
  • HC-SR04 Ultrasonic Distance Sensor
  • GP2Y0A60SZLF Infrared Distance Sensor
  • 5 MegaPixel Color Camera
  • Tilt/pan SG90 servo-controlled sensor platform (UltrasonicDist, IRDist, Camera)
  • ACS712 Current sensor
  • Battery Voltage sensor
  • Push Button On/Off Power Switch
  • Pololu Step-up/down 5.25V 2A regulated switching power-supply
  • Audio Speaker output (for Text-To-Speech)
  • USB microphone (PocketSphinx speech recognition)
Pogo's prior features connected to the new brain and interface:
  • Differential drive 6v 1.2A metal geared motors
  • 32 division wheel encoders
  • Full Skirt Bumper (six directions: Left, Front, Right, LeftRear, RightRear, Rear)
  • 6 C-cell 5000 mAh NiMH rechargeable battery pack


I have reproduced the RugWarriorPro base library in Python, and created Python class interfaces for all the new hardware.  

So far only two RugWarriorPro behaviors have been programmed (YoYo is sketched below): 
  • YoYo, which simply drives forward two feet then backward two feet, and 
  • Wimp, which chooses an escape direction (spinning if needed), tests for an obstacle blocking the escape direction, then moves six inches "away" from the bump (or turns until no obstacle blocks escape, then moves).
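
A minimal sketch of the YoYo behavior; the drive interface below is a hypothetical stand-in, since the real class and method names of my Python port aren't shown here:

    FOOT_INCHES = 12

    class DriveStub:
        """Stand-in for the RugWarriorPro base library port."""
        def drive_inches(self, inches, speed=0.5):
            # positive inches drives forward, negative drives backward
            direction = "forward" if inches > 0 else "backward"
            print(f"driving {direction} {abs(inches)} inches at speed {speed}")

    def yoyo(bot):
        bot.drive_inches(2 * FOOT_INCHES)     # forward two feet
        bot.drive_inches(-2 * FOOT_INCHES)    # backward two feet

    yoyo(DriveStub())
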
The "even newer brain" was an award from element14.com for testing Speech Recognition, in which my tests demonstrated that the Raspberry Pi 3B can perform real-time, language model, speech recognition using only one of the four processing cores.  It will be exciting to create a speech interface to and from the robot, (once the brain transplant is complete).

At the moment, I am diagnosing a power supply shorting issue, and bump detection has stopped working, but I hope to soon begin "Creating A Smart Robot."

Wednesday, July 29, 2015

Memories of Limping Robots

I am sometimes haunted by memories of my robots, past and present.

My former "perfectly good" robots were sold or returned when they proved to be inadequate, or inconvenient for my circumstances.

My present robot is fifteen years old this month and let's face it - "stupid and limping".  I feel guilty for not switching it/him to run mode, and then I feel stupid for feeling guilty.  I feel strangely uncomfortable when I contemplate replacing the entire "brain board", knowing that at best I can transform my robot into a smarter, stupid robot with a different limp, and no family but me.

This is Pogo: