CBC Radio All in A Day
A simple light sensor provides a measure of the light incident on its photo-resistor. As one moves a finger to touch and cover the sensor, the reading may range from 0 to 1023. In principle, that would differentiate more than a thousand shades.
In practice, I could reliably position a finger at perhaps 4 different points, from fully covering the sensor to about 2cm away, with 2 differentiable intermediate positions.
The code in the sensor library allows you to set the number of levels you wish to detect, say n, and returns a number between 0 and (n-1). It assumes that an analog sensor is connected to one of Arduino’s analog pins; of course, the code would work with any analog sensor. The code recalibrates when a reading that is not the normal at-rest value (1 or 0) is held for a period of time (currently set to 10 seconds).
To use, include the library's header file at the beginning of the code, and then create the object, s:
AnalogSensor s(A0, 500, 0, 0);
where the four arguments are: pin number, the refractory period in milliseconds, normal high or low (1 or 0), and debug mode (0 if no debug messages are needed). The refractory period ensures that the reading stays stable for that length of time.
To take a reading,
reading = s.Level(3);
where, in this example, the argument 3 is the number of levels one wishes to distinguish. The number returned is 0, 1, or 2.
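Under the hood, the level calculation amounts to quantizing the raw 10-bit reading into n bands. The sketch below is illustrative only; the function name and the exact banding are my assumptions, not the library's internals, and the real code additionally applies the refractory period and recalibration:

```cpp
// Illustrative only: quantize a raw 10-bit analog reading (0-1023)
// into n discrete levels, returning a number between 0 and (n-1).
// The actual library additionally applies the refractory period
// and the at-rest recalibration described above.
int quantize(int raw, int n) {
    if (raw < 0) raw = 0;
    if (raw > 1023) raw = 1023;
    return raw * n / 1024;  // each level spans roughly 1024/n counts
}
```

With n = 3, readings near 0 map to level 0 and readings near 1023 map to level 2, in line with the s.Level(3) example above.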
There is a corresponding digital sensor class. Normally, one would just take the reading directly; my code for the digital sensor allows a refractory period to be used, again in order to obtain a stable reading. To set up, use the same argument signature as above:
DigitalSensor s(0, 500, 0, 0);
To take a reading,
reading = s.Read();
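The effect of the refractory period can be sketched as follows. This is a simplified model with made-up names, not the code from the repository:

```cpp
// Simplified model of a digital read with a refractory period:
// once the reported value changes, further changes are ignored
// until refractoryMs milliseconds have elapsed, so the reading
// stays stable within that window. (Names are illustrative.)
struct RefractoryRead {
    unsigned long refractoryMs;
    unsigned long lastChangeMs = 0;
    int stableValue = 0;

    // raw is the current pin level; nowMs is the current time
    int read(int raw, unsigned long nowMs) {
        if (raw != stableValue && nowMs - lastChangeMs >= refractoryMs) {
            stableValue = raw;     // accept the new value
            lastChangeMs = nowMs;  // and start a new refractory window
        }
        return stableValue;
    }
};
```

A brief flicker on the line inside the window is simply ignored, which is what makes the reading usable as a deliberate click.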
Code repository: https://github.com/AbilitySpectrum/ArduinoAAC
I have used both the wRobot Light Sensor as well as the Minimum Luminence Light Sensor.
The International Society for Augmentative and Alternative Communication (ISAAC) met in Lisbon from July 21-24, 2014. This is a premier forum for researchers, practitioners and users. In my technical career, I have never been to a forum that is at once technical and practical, serious and fun.
The next conference will take place in Toronto, from August 6th to 13th, 2016.
We click all the time.
On a computer, it is usually a mouse click. Or, a two-finger tap. A click is tied to a visual element on the screen. When activated, a click triggers an action. Often, a click is the last of a series of actions, such as SEND when filling in a form, or SELECT when we are finished browsing the options.
For some people with impairment, a click may be the only action possible. For instance, if someone has mobility in only one finger and limited arm movement, it may not be possible to type on a keyboard. A click is the one and only input available for a computing device.
With one click, what we have is a one-bit communication channel. It is not much, but it can accomplish everything that any other interface can, albeit slowly. Typically, the interface needs to be modified to present options sequentially, and cyclically, for the user to pick.
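A scanning interface of this kind can be modelled in a few lines. The sketch below is a generic illustration of single-switch scanning, not code from the project:

```cpp
#include <string>
#include <vector>

// Generic single-switch scanning: options are highlighted one at a
// time, cyclically; a click selects whichever option is highlighted.
struct Scanner {
    std::vector<std::string> options;
    std::size_t highlighted = 0;

    void step() { highlighted = (highlighted + 1) % options.size(); }
    const std::string& click() const { return options[highlighted]; }
};
```

The interface calls step() on a timer; the user only has to produce one click at the right moment.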
The switches available in the market are robust. However, they may not work in some circumstances:
– they may be too stiff to click
– they may require too much ‘travel’
– they may have a form factor that makes it difficult to position
If we look at a ‘switch’ as a sensing device, then any number of sensors are available. The alternatives are limitless.
Usually sized as a small disc, the capacitive touch sensor requires physical contact. It is a digital sensor, so it does not matter how much contact is made. Small and light in weight, it can easily be positioned close to the flesh without triggering, so the travel can be adjusted.
A light sensor has much the same characteristics as the touch sensor: small, light-weight and easy to mount. It measures the amount of light falling on its 2-3mm photo-resistor, so it is analog. This means that different levels of light can be detected. Depending on the type of sensor, this could correspond to a number between 0 and 1023, or a larger range.
Practically, I have found that it is possible to detect 3 or 4 distinct levels as one brings a finger toward the sensor. This means that the user can communicate more than one bit of information in a single action, which opens up new interface alternatives. For example, we could differentiate between these intents: click, forwards and backwards. It does presuppose that the user has the fine motor control to position a finger at a certain distance from the sensor.
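One way to act on such multi-level readings, assuming three levels, is a simple mapping from level to intent. Which level means what is my assumption here, purely for illustration:

```cpp
// Illustrative mapping from a 3-level reading to the three intents
// mentioned above. The level-to-intent assignment is an assumption.
enum class Intent { Click, Forwards, Backwards };

Intent intentFor(int level) {
    switch (level) {
        case 0:  return Intent::Click;     // finger covering the sensor
        case 1:  return Intent::Forwards;  // intermediate position
        default: return Intent::Backwards; // finger furthest away
    }
}
```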
There are now commercially available gyroscope and accelerometer combo boards that are small and inexpensive. So-called 6DOF boards, they detect (the board’s) angular velocity and acceleration, each in 3 dimensions. A sensor fusion algorithm combines the data from the gyroscopes and accelerometers into usable position information. The math is complex, but newer boards have on-board processing power, so the output can be used with relative ease.
The main advantage of a motion sensor is that it can be mounted on the person, whether on a finger or on a hairband for the head. Thus it does not have to be accurately positioned, unlike externally mounted sensors such as the capacitive touch or light sensors.
We have deployed the touch sensor for a user who uses it with a computer (via an on-screen scanning keyboard). This user had been using a conventional switch, but her finger was getting weaker. The touch sensor is mounted on the old switch, with the thumb placed in a corner of the sensor board, about 2mm from the sensing area. Instead of pressing the switch as before, the thumb now moves laterally to the touch sensor. Compared to the switch, the touch sensor is much lighter to use. An added benefit is that the user now employs a different muscle for control.
A second user, who has locked-in syndrome (LIS), has use of only one toe and one finger. A conventional switch is installed for the toe, to activate the call bell. The user regularly needs to change the positions of her limbs, and the switch’s positioning is not so flexible. We installed a light sensor for her finger. This is an IR sensor, so the light source comes from the sensor itself and it can operate independently of ambient lighting. The sensor requires only that an object be in its light path for detection (the sensitive zone is between 2cm and 5cm from the sensor). In the case of our user, her finger has minimal travel, so the sensor’s light path is positioned less than 2mm from her finger’s rest position.
The light sensor is connected to an Arduino board. As a programmable device, the ‘click’ can be made to be more than a binary affair. For example, a short linger over the sensor could indicate one type of click while a long linger another. The exact criteria for short/long can be adjusted to suit. This opens up the possibility, for instance, for a long click to activate the call bell and a short click to serve as select for a tablet. The sensor becomes a multi-functional input device.
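The short/long distinction reduces to comparing the linger time against a threshold. The 600 ms value below is an arbitrary illustration of the adjustable criterion, not a value from the project:

```cpp
// Classify a linger over the sensor by its duration. The threshold
// is the adjustable short/long criterion mentioned above; 600 ms is
// just an illustrative default.
enum class ClickKind { Short, Long };

ClickKind classifyLinger(unsigned long lingerMs,
                         unsigned long thresholdMs = 600) {
    return lingerMs < thresholdMs ? ClickKind::Short : ClickKind::Long;
}
```

On the Arduino, lingerMs would come from timing the sensor's covered/uncovered transitions with millis().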
For a third user, I use a gyroscope mounted on a hairband to function as a computer mouse. The user is accustomed to operating the computer with a camera-based control, which works well for her when she is in front of the computer. She also uses a tablet with her limited arm movement, via a stylus; this requires the tablet to be precisely placed in front of her. With a head-mounted mouse, she can now use the tablet in bed, and in limited lighting conditions.
More examples are detailed elsewhere on these pages. The source code is at https://github.com/AbilitySpectrum.