If you really must try this on your own Amstrad CPC:
Author: scruss
-
We used to have to *wait* for our memes …
It must be Wednesday somewhere …
-
“… that Chinese instruction manual font”
(this is an old post from 2021 that got caught in my drafts somehow)
Mike asked:
To which I suggested:
Not very helpful links, more of a thought-dump:
- Why does Chinese printing of Latin characters always use the same font? – writing fonts curiosity | Ask MetaFilter
- The Roman typefaces used in Chinese and Japanese text | Hacker News
First PostScript font: STSong (华文宋体), released in 1991, was the first PostScript font from a Chinese foundry [ref: Typekit blog — Pan-CJK Partner Profile: SinoType]. But STSong looks like Garamond(ish).
Maybe source: GB 5007.1-85 24×24 Bitmap Font Set of Chinese Characters for Information Exchange. Originally from 1985, this is a more recent version: GB 5007.1-2010: Information technology—Chinese ideogram coded character set (basic set)—24 dot matrix font.
-
The Potato
… is a thing to help with soldering DIN connectors. I had some made at JLCPCB, and will have them for sale at World of Commodore tomorrow.
Sven Petersen’s “The Potato” – front. DIN7 connector not included
Sven Petersen’s “The Potato” – back
You can get the source from svenpetersen1965/DIN-connector_soldering-aid-The-Potato. I had the file Rev. 0/Gerber/gerber_The_Potato_noFrame_v0a.zip made, and it seems to fit connector pins well.
Each Potato is made up of two PCBs, spaced apart by a nylon washer and held together by M3 nylon screws.
-
can we…?
This is a mini celebratory post to say that I’ve fixed the database encoding problems on this blog. It looks like I will have to go through the posts manually to correct the errors still, but at least I can enter, store and display UTF-8 characters as expected.
“? µ ° × — – ½ ¾ £ é?êè”, he said with some relief.
Postmortem: For reasons I cannot explain or remember, the database on this blog flipped to an archaic character set: latin1, aka ISO/IEC 8859-1. A partial fix was effected by downloading the entire site’s database backup, and changing all the following references in the SQL:
- CHARSET=latin1 → CHARSET=utf8mb4
- COLLATE=latin1_german2_ci → COLLATE=utf8mb4_general_ci
- COLLATE utf8mb4_general_ci → COLLATE utf8mb4_general_ci
- latin1_general_ci → utf8mb4_general_ci
- COLLATE latin1_german2_ci → COLLATE utf8mb4_general_ci
- CHARACTER SET latin1 → CHARACTER SET utf8mb4
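If you'd rather script those replacements than do them by hand in an editor, a few lines of Python will do it. This is just a sketch of the idea, not the exact process I used, and the file names backup.sql / backup-utf8mb4.sql are made up:

replacements = {
    b"CHARSET=latin1": b"CHARSET=utf8mb4",
    b"COLLATE=latin1_german2_ci": b"COLLATE=utf8mb4_general_ci",
    b"COLLATE latin1_german2_ci": b"COLLATE utf8mb4_general_ci",
    b"latin1_general_ci": b"utf8mb4_general_ci",
    b"CHARACTER SET latin1": b"CHARACTER SET utf8mb4",
}

# work on raw bytes, line by line, so encoding oddities in the dump
# can't trip up the script and a huge backup doesn't need to fit in memory
with open("backup.sql", "rb") as src, open("backup-utf8mb4.sql", "wb") as dst:
    for line in src:
        for old, new in replacements.items():
            line = line.replace(old, new)
        dst.write(line)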
For additional annoyance, the entire SQL dump was too big to load back into phpmyadmin, so I had to split it by table. Thank goodness for awk!
#!/usr/bin/awk -f
BEGIN {
    outfile = "nothing.sql";
}
/^# Table: / {
    # very special comment in WP backup that introduces a new table
    # last field is table_name,
    # which we use to create table_name.sql
    t = $NF
    gsub(/`/, "", t);
    outfile = t ".sql";
}
{
    print > outfile;
}
The data still appears to be confused. For example, in the post Compose yourself, Raspberry Pi!, what should appear as “That little key marked “Compose”” appears as “That little key marked “Composeâ€Â”. This isn’t a straight conversion of one character set to another. It appears to have been double-encoded, and wrongly too.
Still, at least I can now write again and have whatever new things I make turn up the way I like. Editing 20 years of blog posts awaits … zzz
-
Autumn in Canada: NAPLPS
NAPLPS rendered in PP3
My OpenProcessing demo “autumn in canada”, redone as a NAPLPS playback file. Yes, it would have been nice to have outlined leaves, but I’ve only got four colours to play with that are vaguely autumnal in NAPLPS’s limited 2-bit RGB.
Played back via dosbox and PP3, with help from John Durno’s very useful Displaying NAPLPS Graphics on a Modern Computer: Technical Note.
This file only displays 64 leaves, as more leaves caused the emulated Commodore 64 NAPLPS viewer I was running to crash.
-
The glorious futility of generating NAPLPS in 2023
Yeah! Actual real NAPLPS made by me! NAPLPS — an almost-forgotten videotex vector graphics format with a regrettable pronunciation (/nap-lips/, no really) — was really hard to create. Back in the early days when it was a worthwhile Canadian initiative called Telidon (see Inter/Access’s exhibit Remember Tomorrow: A Telidon Story) it required a custom video workstation costing $$$$$$. It got cheaper by the time the 1990s rolled round, but it was never easy and so interest waned.
I don’t claim what I made is particularly interesting:
suspiciously canadian
but even decoding the tutorial and standards material was hard. NAPLPS made heavy use of bitfields interleaved and packed into 7- and 8-bit characters. It was kind of a clever idea (lower resolution data could be packed into fewer bytes) but the implementation is quite unpleasant.
A few of the references/tools/resources I relied on:
- The NAPLPS: videotex/teletext presentation level protocol syntax standard. Long. Quite dull and abstract, but it is the reference
- The 1983 BYTE Magazine article series NAPLPS: A New Standard for Text and Graphics. Also long and needlessly wordy, with digressions into extensions that were never implemented. Contains a commented byte dump of an image that explains most concepts by example
- Technical specifications for NAPLPS graphics — aka NAPLPS.ASC. A large text file explaining how NAPLPS works. Fairly clear, but the ASCII art diagrams aren’t the most obvious
- TelidonP5 — an online NAPLPS viewer. Not perfect, but helpful for proofing work
- Videotex – NAPLPS Client for the Commodore 64 Archived — a terminal for the C64 that supports (some) NAPLPS. Very limited in the size of file it can view
- John Durno has spent years recovering Telidon / NAPLPS works. He has published many useful resources on the subject
Here’s the fragment of code I wrote to generate the NAPLPS:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# draw a disappointing maple leaf in NAPLPS - scruss, 2023-09

# stylized maple leaf polygon, quite similar to
# the coordinates used in the Canadian flag ...
maple = [
    [62, 2], [62, 35], [94, 31], [91, 41], [122, 66], [113, 70],
    [119, 90], [100, 86], [97, 96], [77, 74], [85, 114], [73, 108],
    [62, 130], [51, 108], [39, 114], [47, 74], [27, 96], [24, 86],
    [5, 90], [11, 70], [2, 66], [33, 41], [30, 31], [62, 35],
]


def colour(r, g, b):
    # r, g and b are limited to the range 0-3
    return chr(0o74) + chr(
        64
        + ((g & 2) << 4)
        + ((r & 2) << 3)
        + ((b & 2) << 2)
        + ((g & 1) << 2)
        + ((r & 1) << 1)
        + (b & 1)
    )


def coord(x, y):
    # if you stick with 256 x 192 integer coordinates this should be okay
    xsign = 0
    ysign = 0
    if x < 0:
        xsign = 1
        x = x * -1
        x = ((x ^ 255) + 1) & 255
    if y < 0:
        ysign = 1
        y = y * -1
        y = ((y ^ 255) + 1) & 255
    return (
        chr(
            64
            + (xsign << 5)
            + ((x & 0xC0) >> 3)
            + (ysign << 2)
            + ((y & 0xC0) >> 6)
        )
        + chr(64 + ((x & 0x38)) + ((y & 0x38) >> 3))
        + chr(64 + ((x & 7) << 3) + (y & 7))
    )


f = open("maple.nap", "w")
f.write(chr(0x18) + chr(0x1B))  # preamble
f.write(chr(0o16))  # SO: into graphics mode
f.write(colour(0, 0, 0))  # black
f.write(chr(0o40) + chr(0o120))  # clear screen to current colour
f.write(colour(3, 0, 0))  # red
# *** STALK ***
f.write(
    chr(0o44) + coord(maple[0][0], maple[0][1])
)  # point set absolute
f.write(
    chr(0o51) + coord(maple[1][0] - maple[0][0], maple[1][1] - maple[0][1])
)  # line relative
# *** LEAF ***
f.write(
    chr(0o67) + coord(maple[1][0], maple[1][1])
)  # set polygon filled
# append all the relative leaf vertices
for i in range(2, len(maple)):
    f.write(
        coord(
            maple[i][0] - maple[i - 1][0],
            maple[i][1] - maple[i - 1][1]
        )
    )
f.write(chr(0x0F) + chr(0x1A))  # postamble
f.close()
There are a couple of perhaps useful routines in there:
- colour(r, g, b) — spits out the code for two-bits-per-component RGB. Inputs are limited to the range 0–3 without error checking
- coord(x, y) — converts integer coordinates to a NAPLPS output stream. Best limited to a 256 × 192 size. Will also work with positive/negative relative coordinates.
Here’s the generated file:
-
SYN6288 TTS board from AliExpress
After remarkable success with the SYN-6988 TTS module, then somewhat less success with the SYN-6658 and other modules, I didn’t hold out much hope for the YuTone SYN-6288, which – while boasting a load of background tunes that could play over speech – can only convert Chinese text to speech.
as bought from quason official store: SYN6288 speech synthesis module
The wiring is similar to the SYN-6988: a serial UART connection at 9600 baud, plus a Busy (BY) line to signal when the chip is busy. The serial protocol is slightly more complicated, as the SYN-6288 requires a checksum byte at the end.
As I’m not interested in the text-to-speech output itself, here’s a MicroPython script to play all of the sounds:
# very crude MicroPython demo of SYN6288 TTS chip
# scruss, 2023-07

import machine
import time

### setup device
ser = machine.UART(
    0, baudrate=9600, bits=8, parity=None, stop=1
)  # tx=Pin(0), rx=Pin(1)
busyPin = machine.Pin(2, machine.Pin.IN, machine.Pin.PULL_UP)


def sendspeak(u2, data, busy):
    # modified from https://github.com/TPYBoard/TPYBoard_lib/
    # u2 = UART(uart, baud)
    eec = 0
    buf = [0xFD, 0x00, 0, 0x01, 0x01]
    # buf = [0xFD, 0x00, 0, 0x01, 0x79]  # plays with bg music 15
    buf[2] = len(data) + 3
    buf += list(bytearray(data, "utf-8"))
    for i in range(len(buf)):
        eec ^= int(buf[i])
    buf.append(eec)
    u2.write(bytearray(buf))
    while busy.value() != True:
        # wait for busy line to go high
        time.sleep_ms(5)
    while busy.value() == True:
        # wait for it to finish
        time.sleep_ms(5)


for s in "abcdefghijklmnopqrstuvwxy":
    playstr = "[v10][x1]sound" + s
    print(playstr)
    sendspeak(ser, playstr, busyPin)
    time.sleep(2)

for s in "abcdefgh":
    playstr = "[v10][x1]msg" + s
    print(playstr)
    sendspeak(ser, playstr, busyPin)
    time.sleep(2)

for s in "abcdefghijklmno":
    playstr = "[v10][x1]ring" + s
    print(playstr)
    sendspeak(ser, playstr, busyPin)
    time.sleep(2)
Each sound starts and stops with a very loud click, and the sound quality is not great. I couldn’t get a good recording of the sounds (some of which are over a minute long) as the only way I could get reliable audio output was through tiny headphones. Any recording came out hopelessly distorted:
I’m not too disappointed that this didn’t work well. I now know that the SYN-6988 is the good one to get. It also looks like I may never get to try the XFS5152CE speech synthesizer board: AliExpress has cancelled my shipment for no reason. It’s supposed to have some English TTS function, even if quite limited.
Here’s the auto-translated SYN-6288 manual, if you do end up finding a use for the thing.
-
Adding speech to MMBasic
Yup, it’s another “let’s wire up a SYN6988 board” thing, this time for MMBasic running on the Armmite STM32F407 Module (aka ‘Armmite F4’). This board is also known as the BLACK_F407VE, which also makes a nice little MicroPython platform.
Uh, let’s not dwell too much on how the SYN6988 seems to parse 19:51 as “91 minutes to 20” …
Wiring
SYN6988 → Armmite F4
RX → PA09 (COM1 TX)
TX → PA10 (COM1 RX)
RDY → PA08
(your choice of 3.3 V and GND connections, of course)
Where to buy: AliExpress — KAIKAI Electronics Wholesale Store: High-end Speech Synthesis Module Chinese/English Speech Synthesis XFS5152 Real Pronunciation TTS
Yes, I know it says it’s an XFS5152, but I got a SYN6988 and it seems to be about as reliable a source as one can find. The board is marked YS-V6E-V1.03, and even mentions SYN6988 on the rear silkscreen:
Code
REM SYN6988 speech demo - MMBasic / Armmite F4
REM scruss, 2023-07
OPEN "COM1:9600" AS #5
REM READY line on PA8
SETPIN PA8, DIN, PULLUP
REM you can ignore font/text commands
CLS
FONT 1
TEXT 0,15,"[v1]Hello - this is a speech demo."
say("[v1]Hello - this is a speech demo.")
TEXT 0,30,"[x1]soundy[d]"
say("[x1]soundy[d]"): REM chimes
TEXT 0,45,"The time is "+LEFT$(TIME$,5)+"."
say("The time is "+LEFT$(TIME$,5)+".")
END

SUB say(a$)
  LOCAL dl%,maxlof%
  REM data length is text length + 2 (for the 1 and 0 bytes)
  dl%=2+LEN(a$)
  maxlof%=LOF(#5)
  REM SYN6988 simple data packet
  REM byte 1 : &HFD
  REM byte 2 : data length (high byte)
  REM byte 3 : data length (low byte)
  REM byte 4 : &H01
  REM byte 5 : &H00
  REM bytes 6-: ASCII string data
  PRINT #5, CHR$(&hFD)+CHR$(dl%\256)+CHR$(dl% MOD 256)+CHR$(1)+CHR$(0)+a$;
  DO WHILE LOF(#5)<maxlof%
    REM pause while sending text
    PAUSE 5
  LOOP
  DO WHILE PIN(PA8)<>1
    REM wait until RDY is high
    PAUSE 5
  LOOP
  DO WHILE PIN(PA8)<>0
    REM wait until SYN6988 signals READY
    PAUSE 5
  LOOP
END SUB
For more commands, please see Embedded text commands.
Here’s the auto-translated manual for the SYN6988:
-
Markedly less success with three TTS boards from AliExpress
The other week’s success with the SYN6988 TTS chip was not repeated with three other modules I ordered, alas. Two of them I couldn’t get a peep out of, the other didn’t support English text-to-speech.
SYN6658
This one looks remarkably like the SYN6988:
Yes, I added the 6658 label so I could tell the boards apart
Apart from the main chip, the only difference appears to be that the board’s silkscreen says YS-V6 V1.15 where the SYN6988’s said YS-V6E V1.02.
To be fair to YuTone (the manufacturer), they claim this only supports Chinese as an input language. If you feed it English, at best you’ll get it spelling out the letters. It does have quite a few amusing sounds, though, so at least you can make it beep and chime. My MicroPython library for the VoiceTX SYN6988 text to speech module can drive it, as far as I understand it.
Here are the sounds:
msga–msgn: Polyphonic Chord Beep
sound101–sound124: Prompt Tone
sound201–sound203, sound205: phone ringtone
sound204: phone ringing
sound206–sound209: doorbell
sound210–sound213: alarm
sound214–sound219: wind chimes
sound301–sound314: alarm
sound315–sound319: alert/emergency
sound401–sound407: credit card successful
sound408: successfully swiped the card
sound501: cuckoo
sound502: error
sound503: applause
sound504, sound505: laser
sound506: landing
sound507: gunshot
sound601: alarm sound / air raid siren (long)
sound602: prelude to weather forecast (long)
SYN-6658 Sound Reference
Where I bought it: Electronic Component Module Store: Chinese-to-real-life Speech Synthesis Playing Module TTS Announcer SYN6658 of Bank Bus Broadcasting.
Auto-translated manual:
Unknown “TTS Text-to-speech Broadcast Synthesis Module”
All I could get from this one was a power-on chime. The main chip has had its markings ground off, so I’ve no idea what it is.
Red and black wires seem to be standard 5 V power. Yellow seems to be serial in, white is not connected.
Where I bought it: Electronic Component Module Store / Chinese TTS Text-to-speech Broadcast Synthesis Module MCU Serial Port Robot Plays Prompt Advertising Board
HLK-V40 Speech Synthesis Module
In theory, this little board has a lot going for it: wifi, bluetooth, control via AT commands. In practice, I couldn’t get it to do a thing.
Where I bought it: HI-LINK Component Store / HLK-V40 Speech Synthesis Module TTS Pure Text to Speech Playback Hailinco AI intelligent Speech Synthesis Broadcast
I’ve still got a SYN6288 to look at, plus an XFS5152CE TTS that may or may not be in the mail. The SYN6988 is the best of the bunch so far.
-
SYN-6988 Speech with MicroPython
Full repo, with module and instructions, here: scruss/micropython-SYN6988: MicroPython library for the VoiceTX SYN6988 text to speech module
(and for those that CircuitPython is the sort of thing they like, there’s this: scruss/circuitpython-SYN6988: CircuitPython library for the YuTone VoiceTX SYN6988 text to speech module.)
I have a bunch of other boards on order to see if the other chips (SYN6288, SYN6658, XFS5152) work in the same way. I really wonder which I’ll end up receiving!
Update (2023-07-09): Got the SYN6658. It does not support English TTS and thus is not recommended. It does have some cool sounds, though.
Embedded Text Command Sound Table
The github repo references Embedded text commands, but all of the sound references were too difficult to paste into a table there. So here are all of the ones that the SYN-6988 knows about:
- Name is the string you use to play the sound, eg: [x1]sound101
- Alias is an alternative name by which you can call some of the sounds. This is for better compatibility with the SYN6288 apparently. So [x1]sound101 is exactly the same as specifying [x1]sounda
- Type is the sound description from the manual. Many of these are blank
- Link is a playable link for a recording of the sound.
sound101–sound118: aliases sounda–soundr
sound119–sound124: aliases soundt–soundy (there is no “sounds” alias)
sound201–sound203, sound205: phone ringtone
sound204: phone rings
sound206–sound209: doorbell
sound301–sound314: alarm
sound315–sound318: alert/emergency
sound401–sound407: credit card successful
sound408: successfully swiped the card
SYN-6988 Sound Reference
-
Speech from Python with the SYN6988 module
I’ve had one of these cheap(ish – $15) sound modules from AliExpress for a while. I hadn’t managed to get much out of it before, but I poked about at it a little more and found I was trying to drive the wrong chip. Aha! Makes all the difference.
So here’s a short narration from my favourite Richard Brautigan poem, read by the SYN6988.
Sensitive listener alert! There is a static click midway through. I edited out the clipped part, but it’s still a little jarring. It would always do this at the same point in playback, for some reason.
The only Pythonish code I could find for these chips was meant for the older SYN6288 and MicroPython (syn6288.py). I have no idea what I’m doing, but with some trivial modification, it makes sound.
I used the simple serial UART connection: RX -> TX, TX -> RX, 3V3 to 3V3 and GND to GND. My board is hard-coded to run at 9600 baud. I used the USB serial adapter that came with the board.
Here’s the code that read that text:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import serial
import time


# NB via MicroPython and old too! Also for a SYN6288, which I don't have
# nabbed from https://github.com/TPYBoard/TPYBoard_lib/
def sendspeak(port, data):
    eec = 0
    buf = [0xFD, 0x00, 0, 0x01, 0x01]
    buf[2] = len(data) + 3
    buf += list(bytearray(data, encoding='utf-8'))
    for i in range(len(buf)):
        eec ^= int(buf[i])
    buf.append(eec)
    port.write(bytearray(buf))


ser = serial.Serial("/dev/ttyUSB1", 9600)
sendspeak(ser, "[t5]I like to think [p100](it [t7]has[t5] to be!)[p100] of a cybernetic ecology [p100]where we are free of our labors and joined back to nature, [p100]returned to our mammal brothers and sisters, [p100]and all watched over by machines of loving grace")
time.sleep(8)
ser.close()
This code is bad. All I did was prod stuff until it stopped not working. Since all I have to work from includes a datasheet in Chinese (from here: ??????-SYN6988???TTS????) there’s lots of stuff I could do better. I used the tone and pause tags to give the reading a little more life, but it’s still a bit flat. For $15, though, a board that makes a fair stab at reading English is not bad at all. We can’t all afford vintage DECtalk hardware.
The one thing I didn’t do is use the SYN6988’s Busy/Ready line to see if it was still busy reading. That would let me send it text as soon as it was ready, rather than pausing for 8 seconds after the speech. This refinement will come later, most likely when I port this to MicroPython.
More resources:
- Board front image (labelled YS-V6E V1.02)
- Board back image
- Auto-translated programming manual (thanks, Google Translate!): SYN6988-translated.pdf
-
A terrible guide to singing with DECtalk
It’s now possible to build and run the DECtalk text to speech system on Linux. It even builds under emscripten, enabling DECtalk for Web in your browser. You too can annoy everyone within earshot making it prattle on about John Madden.
But DECtalk can sing! Because it’s been around so long, there are huge archives of songs in DECtalk format out there. The largest archive is at THE FLAME OF HOPE website, under the Dectalk section.
Building DECtalk songs isn’t easy, especially for a musical numpty like me. You need a decent grasp of music notation, phonemic/phonetic markup and patience with DECtalk’s weird and ancient text formats.
DECtalk phonemes
While DECtalk can accept text and turn it into a fair approximation of spoken English, for singing you have to use phonemes. Let’s say we have a solfège-ish major scale:
do re mi fa sol la ti do
If we’re all fancy-like and know our International Phonetic Alphabet (IPA), that would translate to:
/doʊ ɹeɪ miː faː soʊ laː tiː doʊ/
or if your fonts aren’t up to IPA:
DECtalk uses a variant on the ARPABET convention to represent IPA symbols as ASCII text. The initial consonant sounds remain as you might expect: D, R, M, F, S, L and T. The vowel sounds, however, are much more complex. This will give us a DECtalk-speakable phrase:
[dow rey miy faa sow laa tiy dow].
Note the opening and closing brackets and the full stop at the end. The brackets introduce phonemes, and the full stop tells DECtalk that the text is at an end. Play it in the DECtalk for Web window and be unimpressed: while the pitch changes are non-existent, the sounds are about right.
For more information about DECtalk phonemes, please see Phonemic Symbols Listed By Language and chapter 7 of DECtalk DTC03 Text-to-Speech System Owner’s Manual.
If you want to have a rough idea of what the phonemes in a phrase might be, you can use DECtalk’s :log phonemes option. You might still have to massage the input and output a bit, like using sed to remove language codes:
say -l us -pre '[:log phonemes on]' -post '[:log phonemes off]' -a "doe ray me fah so lah tea doe" | sed 's/us_//g;'
d ' ow r ' ey m iy f ' aa) s ow ll' aa t ' iy d ' ow.
Music notation
To me — a not very musical person — staff notation looks like it was designed by a maniac. A more impractical system to indicate arrangement of notes and their durations I don’t think I could come up with: and yet we’re stuck with it.
DECtalk uses a series of numbered pitches plus durations in milliseconds for its singing mode. The notes (1–37) correspond to C2 to C5. If you’re familiar with MIDI note numbers, DECtalk’s 1–37 correspond to MIDI note numbers 36–72. This is how DECtalk’s pitch numbers would look as major scales on the treble clef:
The entire singing range of DECtalk as a C Major scale, from note 1 (C2, 65.4 Hz) to note 37 (C5, 523.4 Hz)
I’m not sure browsers can play MIDI any more, but here you go (doremi-abc.mid):
and since I had to learn abc notation to make these noises, here is the source:
%abc-2.1
X:1
T:Do Re Mi
C:Trad.
M:4/4
L:1/4
Q:1/4=120
K:C
%1
C,, D,, E,, F,,| G,, A,, B,, C,| D, E, F, G,| A, B, C D| E F G A| B c z2 |]
w:do re mi fa sol la ti do re mi fa sol la ti do re mi fa sol la ti do
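If you'd rather compute than read staves, the mapping is simple. Here's a little Python sketch of my own (not anything DECtalk ships with) showing how pitch numbers, MIDI note numbers and frequencies line up:

# DECtalk sung pitch 1-37 maps onto MIDI notes 36-72 (C2 to C5)
def dectalk_to_midi(n):
    return n + 35

# standard equal temperament: A4 (MIDI note 69) is 440 Hz
def midi_to_hz(m):
    return 440 * 2 ** ((m - 69) / 12)

for n in (1, 13, 25, 37):  # the four Cs in DECtalk's sung range
    print(n, dectalk_to_midi(n), round(midi_to_hz(dectalk_to_midi(n))), "Hz")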
Each element of a DECtalk song takes the following form:
phoneme <duration, pitch number>
The older DTC-03 manual hints that it takes around 100 ms for DECtalk to hit pitch, so for each ½ second utterance (or quarter note at 120 bpm, ish), I split it up as:
- 100 ms of the initial consonant;
- 337 ms of the vowel sound;
- 63 ms of pause (which has the phoneme code “_”). Pauses don’t need pitch numbers, unless you want them to preempt DECtalk’s pitch-change algorithm.
So the three lowest notes in the major scale would sing as:
[d<100,1>ow<337,1>_<63> r<100,3>ey<337,3>_<63> m<100,5>iy<337,5>_<63>].
I’ve split them up for ease of reading, but DECtalk adds extra pauses if you include spaces, so don’t.
The full three octave major scale looks like this:
[d<100,1>ow<337,1>_<63>r<100,3>ey<337,3>_<63>m<100,5>iy<337,5>_<63>f<100,6>aa<337,6>_<63>s<100,8>ow<337,8>_<63>l<100,10>aa<337,10>_<63>t<100,12>iy<337,12>_<63>d<100,13>ow<337,13>_<63>r<100,15>ey<337,15>_<63>m<100,17>iy<337,17>_<63>f<100,18>aa<337,18>_<63>s<100,20>ow<337,20>_<63>l<100,22>aa<337,22>_<63>t<100,24>iy<337,24>_<63>d<100,25>ow<337,25>_<63>r<100,27>ey<337,27>_<63>m<100,29>iy<337,29>_<63>f<100,30>aa<337,30>_<63>s<100,32>ow<337,32>_<63>l<100,34>aa<337,34>_<63>t<100,36>iy<337,36>_<63>d<100,37>ow<337,37>_<63>].
You can paste that into the DECtalk browser window, or run the following from the command line on Linux:
say -pre '[:PHONE ON]' -a '[d<100,1>ow<337,1>_<63>r<100,3>ey<337,3>_<63>m<100,5>iy<337,5>_<63>f<100,6>aa<337,6>_<63>s<100,8>ow<337,8>_<63>l<100,10>aa<337,10>_<63>t<100,12>iy<337,12>_<63>d<100,13>ow<337,13>_<63>r<100,15>ey<337,15>_<63>m<100,17>iy<337,17>_<63>f<100,18>aa<337,18>_<63>s<100,20>ow<337,20>_<63>l<100,22>aa<337,22>_<63>t<100,24>iy<337,24>_<63>d<100,25>ow<337,25>_<63>r<100,27>ey<337,27>_<63>m<100,29>iy<337,29>_<63>f<100,30>aa<337,30>_<63>s<100,32>ow<337,32>_<63>l<100,34>aa<337,34>_<63>t<100,36>iy<337,36>_<63>d<100,37>ow<337,37>_<63>].'
It sounds like this:
Singing a scale is hardly singing a tune, but hey, you were warned that this was a terrible guide at the outset. I hope I’ve given you a start on which you can build your own songs.
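If typing those strings by hand loses its charm, they're easy enough to generate. Here's a rough Python sketch of my own (not from any DECtalk tool) that builds the full major-scale string above, using the 100/337/63 ms split described earlier:

# build the DECtalk sung C major scale as one phoneme string
# consonant/vowel pairs for do re mi fa sol la ti
syllables = [("d", "ow"), ("r", "ey"), ("m", "iy"), ("f", "aa"),
             ("s", "ow"), ("l", "aa"), ("t", "iy")]
major_steps = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets within an octave

notes = []
for octave in range(3):  # DECtalk pitches 1-37 span three octaves
    for (c, v), step in zip(syllables, major_steps):
        notes.append((c, v, 1 + 12 * octave + step))
notes.append(("d", "ow", 37))  # the top C

song = "".join(
    "%s<100,%d>%s<337,%d>_<63>" % (c, n, v, n) for c, v, n in notes
)
print("[" + song + "].")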
(One detail I haven’t tried yet: the older DTC-03 manual hints that singing notes can take Hz values instead of pitch numbers, and apparently loses the vibrato effect. It’s not that hard to convert from a note/octave to a frequency. Whether this still works, I don’t know.)
This post from Patrick Perdue suggested to me I had to dig into the Hz value substitution because the results are so gloriously awful. Of course, I had to write a Perl regex to make the conversions from DECtalk 1–37 sung notes to frequencies from 65–523 Hz:
perl -pwle 's|(?<=,)(\d+)(?=>)|sprintf("%.0f", 440*2**(($1-34)/12))|eg;'
(as one does). So the sung scale ends up as this non-vibrato text:
say -pre '[:PHONE ON]' -a '[d<100,65>ow<337,65>_<63>r<100,73>ey<337,73>_<63>m<100,82>iy<337,82>_<63>f<100,87>aa<337,87>_<63>s<100,98>ow<337,98>_<63>l<100,110>aa<337,110>_<63>t<100,123>iy<337,123>_<63>d<100,131>ow<337,131>_<63>r<100,147>ey<337,147>_<63>m<100,165>iy<337,165>_<63>f<100,175>aa<337,175>_<63>s<100,196>ow<337,196>_<63>l<100,220>aa<337,220>_<63>t<100,247>iy<337,247>_<63>d<100,262>ow<337,262>_<63>r<100,294>ey<337,294>_<63>m<100,330>iy<337,330>_<63>f<100,349>aa<337,349>_<63>s<100,392>ow<337,392>_<63>l<100,440>aa<337,440>_<63>t<100,494>iy<337,494>_<63>d<100,523>ow<337,523>_<63>].'
That doesn’t sound as wondrously terrible as it should, most probably because there are only very small differences between each sung word. So how about we try something better? Like the refrain from The Turtles’ Happy Together, as posted on TheFlameOfHope:
say -pre '[:PHONE ON]' -a '[:nv] [:dv gn 73] [AY<400,29> KAE<200,24> N<100> T<100> SIY<400,21> MIY<400,17> LAH<200,15> VAH<125,19> N<75> NOW<400,22> BAH<200,26> DXIY<200,27> BAH<300,26> T<100> YU<600,24> FOR<200,21> AO<300,24> LX<100> MAY<400,26> LAY<900,27> F<300> _<400> WEH<300,29> N<100> YXOR<400,24> NIR<400,21> MIY<400,17> BEY<200,15> BIY<200,19> DHAX<400,22> SKAY<125,26> Z<75> WIH<125,27> LX<75> BIY<400,26> BLUW<600,24> FOR<200,21> AO<300,24> LX<100> MAY<400,26> LAY<900,27> F<300> _<300> ].'
“Refrain” is a good word, as it’s exactly what I should have done, rather than commit a terribleness on the audio by de-vibratoing it:
say -pre '[:PHONE ON]' -a '[:nv] [:dv gn 73] [AY<400,330> KAE<200,247> N<100> T<100> SIY<400,208> MIY<400,165> LAH<200,147> VAH<125,185> N<75> NOW<400,220> BAH<200,277> DXIY<200,294> BAH<300,277> T<100> YU<600,247> FOR<200,208> AO<300,247> LX<100> MAY<400,277> LAY<900,294> F<300> _<400> WEH<300,330> N<100> YXOR<400,247> NIR<400,208> MIY<400,165> BEY<200,147> BIY<200,185> DHAX<400,220> SKAY<125,277> Z<75> WIH<125,294> LX<75> BIY<400,277> BLUW<600,247> FOR<200,208> AO<300,247> LX<100> MAY<400,277> LAY<900,294> F<300> _<300> ].'
Oh dear. You can’t unhear that, can you?
-
Using the IBM Wheelwriter 10 Series II Typewriter as a printer
I can’t believe I’m having to write this article again. Back in 2004, I picked up an identical model of typewriter on Freecycle, also complete with the parallel printer option board. The one I had back then had an incredible selection of printwheels. And I gave it all away! Aaargh! Why?
Last month, I ventured out to a Value Village in a more affluent part of town. On the shelf for $21 was a familiar boxy shape, another Wheelwriter 10 Series II Typewriter model 6783. This one also has the printer option board, but it only has one printwheel, Prestige Elite. It powered on at the test rack enough for me to see it mostly worked, so I bought it.
Once I got it home, though, I could see it needed some work. The platen was covered in ink and correction fluid splatters. Worse, the carriage would jam in random places. It was full of dust and paperclips. But the printwheel did make crisp marks on paper, so it was worth looking at a repair.
Thanks to Phoenix Typewriter’s repair video “IBM Wheelwriter Typewriter Repair Fix Carriage Carrier Sticks Margins Reset Makes Noise”, I got it going again. I’m not sure how much life I’ve got left in the film ribbon, but for now it’s doing great.
Note that there are lots of electronics projects — such as tofergregg/IBM-Wheelwriter-Hack: Turning an IBM Wheelwriter into a unique printer — that use an Arduino or similar to drive the printer. This is not that (or those). Here I’m using the Printer Option board plus a USB to Parallel cable. There’s almost nothing out there about how these work.
Connecting the printer
You’ll need a USB to Parallel adapter, something like this: StarTech 10 ft USB to Parallel Printer Adapter – M/M. You need the kind with the big Centronics connector, not the 25-pin D-type. My one (old) has a chunky plastic case that won’t fit into the port on the Wheelwriter unless you remove part of the cable housing. On my Linux box, the port device is /dev/usb/lp0. You might want to add yourself to the lp group so you can send data to the printer without using sudo:
sudo adduser user lp
The Wheelwriter needs to be switched into printer mode manually by pressing the Code + Printer Enable keys.
Printer Codes
As far as I can tell, the Wheelwriter understands a subset of IBM ProPrinter codes. As on most simple printers, most control codes start with an Esc character (ASCII 27). Lines need to end with both a Carriage Return (ASCII 13) and a newline (ASCII 10). Sending only CRs allows overprinting, while sending only newlines gives stair-step output.
The codes I’ve found to work so far are:
- Emphasized printing — Esc E
- Cancel emphasized printing — Esc F
(double strike printing [Esc G, Esc H] might also work, but I haven’t tried them)
- Continuous underscore — Esc – 1
- Cancel continuous underscore — Esc – 0
(technically, these are Esc – n, where n = ASCII 1 or 0, not character “1” or “0”. But the characters seem to work, too)
- 7/72″ line spacing — Esc 1
- Set text line spacing to n / 72″ units — Esc A n
(this one really matters: if you send “6” (ASCII 54) instead of 6, you’ll get 54/72″ = 3/4″ [= 19 mm] line spacing instead of the 1/12″ [= 2.1 mm] you expected)
- Start text line spacing — Esc 2
Text functions such as italics and extended text aren’t possible with a daisywheel printer. You can attempt dot-matrix graphics using full stops and micro spacing, but I don’t want to know you if you’d try.
Sending codes from the command line
echo is about the simplest way of doing it. Some systems provide an echo built-in that doesn’t support the -e (interpret special characters) and -n (don’t send newline) options. You may have to call /usr/bin/echo instead.
To print emphasized:
echo -en 'well \eEhello\eF there!\r\n' > /dev/usb/lp0
which prints
well hello there!
To print underlined:
echo -en 'well \e-1hello\e-0 there!\r\n' > /dev/usb/lp0
which types
well hello there!
To set the line spacing to a (very cramped) 1/12″ [= 2.1 mm] and print a horizontal line of dots and a vertical line of dots, both equally spaced (if you’re using Prestige Elite):
echo -en '\eA\x05\e2\r\n..........\r\n.\r\n.\r\n.\r\n.\r\n.\r\n.\r\n.\r\n.\r\n.\r\n\r\n' > /dev/usb/lp0
Character set issues
IBM daisywheels typically can’t represent the whole ASCII character set. Here’s what an attempt to print codes 33 to 126 in Prestige Elite looks like:
The following characters are missing:
< > \ ^ ` { | } ~
So printing your HTML or Python is right out. FORTRAN, as ever, is safe.
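If you want to reproduce that character-set test on your own machine, a couple of lines of Python will do it. This is a sketch of my own, assuming the printer is still on /dev/usb/lp0:

# send ASCII codes 33-126 to the Wheelwriter, ending with CR + LF
with open("/dev/usb/lp0", "wb") as lp:
    lp.write(bytes(range(33, 127)) + b"\r\n")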
Prestige Elite is a 12 character per inch font (“12 pitch”, or even “Elite” in typewriter parlance) that’s mostly been overshadowed by Courier (typically 10 characters per inch) in computer usage. This is a shame, as it’s a much prettier font.
Related, yet misc.
There’s very little out there about printing with IBM daisywheels. This is a dump of the stuff I’ve found that may help other people:
- Wheelwriter 10 Series II Typewriter 6783 Operator’s Guide (Internet Archive; nothing about the printer option)
- IBM didn’t make too many daisywheel printers. Two models were the 5216 Wheelprinter and 5223 Wheelprinter E, possibly intended for larger IBM machines. The 5216 Wheelprinter looks like it may use similar character codes. Here’s a (Printer Definition File?? An IBM thing, I think) for that printer that might help the interested: ibm5216_pdf
- The IBM 6901 “Personal Typing System” included a daisywheel printer (Correcting Wheelwriter Printer 6902) that looks almost identical to a Wheelwriter 10 Series II with the keyboard lopped off. But I can find nothing about it.
- Word Perfect 5 may have had a driver for this typewriter/printer, but that doesn’t help me with the control codes.
-
The Joy of BirdNetPi
I don’t think I’ve had as much enjoyment from a piece of software for a very long time as I’ve had with BirdNET-Pi. It’s a realtime acoustic bird classification system for the Raspberry Pi. It listens through a microphone you place somewhere near where you can hear birds, and it’ll go off and guess what it’s hearing, using a cut-down version of the BirdNET Sound ID model. It does this 24/7, and saves the samples it hears. You can then go to a web page (running on the same Raspberry Pi) and look up all the species it has heard.
Our Garden
Not very impressive, kind of overgrown, in the wrong part of town. Small, too. But birds love it. At this time of year, it’s alive with birds. You can’t make them out, but there’s a pair of Rose-breasted Grosbeaks happily snacking near the top of the big tree. There are conifers next door too, so we get birds we wouldn’t expect.
We are next to two busy subway/train stations, and in between two schools. There’s a busy intersection nearby, too. Consequently, the background noise is horrendous.
What I used
This was literally “stuff I had lying around”:
- Raspberry Pi 3B+ (with power supply, case, thermostatic fan and SD card)
- USB extension cable (this, apparently, is quite important to isolate the USB audio device from electrical noise)
- Horrible cheap USB sound card: I paid about $2 for a “3d sound” thing about a decade ago. It records in mono. It works. My one is wrapped in electrical tape as the case keeps threatening to fall off, plus it has a hugely bright flashing LED that is annoying.
- Desktop mic (circa 2002): before video became a thing, PCs had conferencing microphones. I think I got this one free with a PC over 20 years ago. It’s entirely unremarkable and is not an audiophile device. I stuck it out a back window and used a strip of gaffer tape to stop bugs getting in. It’s not waterproof, but it didn’t rain the whole week it was out the window.
- Raspberry Pi OS Lite 64-bit. Yes, it has to be 64 bit.
- BirdNET-Pi installation on top.
I spent very little time optimizing this. I had to fiddle with microphone gain slightly. That’s all.
What I heard
To the best of my knowledge, I have actual observations of 30 species, observed between May 7th and May 16th 2023:
American Goldfinch, American Robin, Baltimore Oriole, Blue Jay, Cedar Waxwing, Chimney Swift, Clay-colored Sparrow, Common Grackle, Common Raven, Gray Catbird, Hermit Thrush, House Finch, House Sparrow, Killdeer, Mourning Dove, Nashville Warbler, Northern Cardinal, Northern Parula, Orchard Oriole, Ovenbird, Red-winged Blackbird, Ring-billed Gull, Rose-breasted Grosbeak, Ruby-crowned Kinglet, Song Sparrow, Veery, Warbling Vireo, White-throated Sparrow, White-winged Crossbill, Wood Thrush
I’ll put the recordings at the end of this post. Note, though, they’re noisy: Cornell Lab quality they ain’t.
What I learned
This is the first time that I’ve let an “AI” classifier model run with no intervention. If it flags some false positives, then it’s pretty low-stakes when it’s wrong. And how wrong did it get some things!
allegedly a Barred Owl, this is clearly a two-stroke leafblower
Black-Billed Cuckoo? How about kids playing in the school yard?
Emergency vehicles are Common Loons now, according to BirdNetPi
Police cars at 2:24 am are Eastern Screech-Owls. I wonder if we could use this classifier to detect over-policed, under-served neighbourhoods?
Great Black-backed Gulls, or kids playing? The latter
Turkey Vulture? How about a very farty two-stroke engine in a bicycle frame driving past?
(This thing stinks out the street, blecch)
There are also false positives for Trumpeter Swans (local dog) and Tundra Swans (kids playing). These samples had recognizable voices, so I didn’t include them here.
The 30 positive species identifications
Many of these have a fairly loud click at the start of the sample, so mind your ears.
American Goldfinch
American Robin
Baltimore Oriole
(I dunno what’s going on here; the next sample’s much more representative)
Blue Jay
Cedar Waxwing
Chimney Swift
Clay-colored Sparrow
Common Grackle
Common Raven
Gray Catbird
Hermit Thrush
House Finch
House Sparrow
Killdeer
Mourning Dove
Nashville Warbler
Northern Cardinal
Hey, we’ve got both of the repetitive songs that these little doozers chirp out all day. Song 1:
and song 2 …
Northern Parula
Orchard Oriole
Ovenbird
Red-winged Blackbird
Ring-billed Gull
Rose-breasted Grosbeak
Ruby-crowned Kinglet
Song Sparrow
Veery
Warbling Vireo
White-throated Sparrow
White-winged Crossbill
Wood Thrush
Boring technical bit
BirdNetPi doesn’t create combined spectrograms with audio as a single video file. What it does do is create an mp3 plus a PNG of the spectrogram. ffmpeg can make a nice not-too-large webm video for sharing:
ffmpeg -loop 1 -y -i 'birb.mp3.png' -i 'birb.mp3' -ac 1 -crf 48 -vf scale=720:-2 -shortest 'birb.webm'
(Minor update, May 2024: the original project maintainer has moved on, so I changed the project link to point to Nachtzuster/BirdNET-Pi: A realtime acoustic bird classification system for the Raspberry Pi 5, 4B 3B+ 0W2 and more. Built on the TFLite version of BirdNET.)
-
Edwin Morgan’s “The Computer’s First Christmas Card”
as performed by the flite speech synthesizer and some shell scripts
The Computer’s First Christmas card
Not quite as good as having the late Prof. Morgan recite it to you himself — one of the few high points of my school experience — but you take what you can get in this economy.
MERRY CHRISTMAS
*** FORTRAN STOP
-
MicroPython on the Seeed Studio Wio Terminal: it works!
A while back, Seeed Studio sent me one of their Wio Terminal devices to review. It was pretty neat, but having to use Arduino to access all of its features was a little limiting. I still liked it, though, and wrote about it here: SeeedStudio Wio Terminal
Wio Terminal, doing a thing
There wasn’t any proper MicroPython support for the device as it used a Microchip/Atmel SAMD51 ARM® Cortex®-M4 micro-controller. But since I wrote the review, one developer (robert-hh) has worked almost entirely solo to make SAMD51 and SAMD21 support useful in mainline MicroPython.
Hey! Development is still somewhere between “not quite ready for prime time” and “beware of the leopard”. MicroPython on the SAMD51 works remarkably well for supported boards, but don’t expect this to be beginner-friendly yet.
I thought I’d revisit the Wio Terminal and see what I could do using a nightly build (downloaded from Downloads – Wio Terminal D51R – MicroPython). Turns out, most of the board works really well!
What doesn’t work yet
- Networking/Bluetooth – this is never going to be easy, especially with Seeed Studio using a separate RTL8720 SoC. It may not be entirely impossible, as previously thought, but so far, wifi support seems quite far away
- QSPI flash for program storage – this was formerly not implemented; it works now too, but it’s quite slow since it relies on a software SPI driver. More details: samd51: MicroPython on the Seeed Wio Terminal · Discussion #9838 · micropython
- RTC – formerly a compile-time option that wasn’t available on the stock images (not all SAMD51 boards have a separate RTC oscillator, and deriving the RTC from the system oscillator would be quite wobbly). RTC works now! It may even be possible to provide backup battery power and have it keep time when powered off. VBAT / PB03 / SPI_SCK is broken out to the 40-pin connector.
What does work
- Display – ILI9341 320×240 px, RGB565 via SPI
- Accelerometer – LIS3DHTR via I²C
- Microphone – analogue
- Speaker – more like a buzzer, but this little PWM speaker element does allow you to play sounds
- Light Sensor – via analogue photo diode
- IR emitter – PWM, not tied to any hardware protocol
- Internal LED – a rather faint blue thing, but useful for low-key signalling
- Micro SD Card – via SPI. Works well with MicroPython’s built-in virtual file systems
- Switches and buttons – three buttons on the top, and a five-way mini-joystick
- I²C via Grove Connector – a second, separate I²C channel.
I’ll go through each of these here, complete with a small working example.
Inside the remarkably hard-to-open Wio Terminal
LED
Let’s start with the simplest feature: the tiny blue LED hidden inside the case. You can barely see this, but it glows out around the USB C connector on the bottom of the case.
- MicroPython interfaces: machine.Pin, machine.PWM
- Control pin: Pin(“LED_BLUE”) or Pin(15), or Pin(“PA15”): any one of these would work.
Example: Wio-Terminal-LED.py
# MicroPython / Seeed Wio Terminal / SAMD51
# Wio-Terminal-LED.py - blink the internal blue LED
# scruss, 2022-10
# -*- coding: utf-8 -*-

from machine import Pin
from time import sleep_ms

led = Pin("LED_BLUE", Pin.OUT)  # or Pin(15) or Pin("PA15")

try:
    while True:
        led.value(not led.value())
        sleep_ms(1200)
except:
    led.value(0)  # turn it off if user quits
    exit()
IR LED
I don’t have any useful applications of the IR LED for device control, so check out Awesome MicroPython’s IR section for a library that would work for you.
- MicroPython interfaces: machine.PWM
- Control pin: Pin(“PB31”)
Example: Wio-Terminal-IR_LED.py
# MicroPython / Seeed Wio Terminal / SAMD51
# Wio-Terminal-IR_LED.py - blink the internal IR LED
# scruss, 2022-10
# -*- coding: utf-8 -*-

# Hey! This is a completely futile exercise, unless you're able
# to see into the IR spectrum. But we're here to show you the pin
# specification to use. For actual useful libraries to do stuff with
# IR, take a look on https://awesome-micropython.com/#ir

# So this is a boring blink, 'cos we're keeping it short here.
# You might be able to see the LED (faintly) with your phone camera

from machine import Pin, PWM
from time import sleep_ms

ir = PWM(Pin("PB31"))  # "IR_CTL" not currently defined

try:
    while True:
        ir.duty_u16(32767)  # 50% duty
        ir.freq(38000)  # fast flicker
        sleep_ms(1200)
        ir.duty_u16(0)  # off
        sleep_ms(1200)
except:
    ir.duty_u16(0)  # turn it off if user quits
    exit()
Buttons
There are three buttons on top, plus a 5-way joystick on the front. Their logic is inverted, so they read 0 when pressed, 1 when not. It’s probably best to use machine.Signal with these to make operation more, well, logical.
- MicroPython interface: machine.Signal (or machine.Pin)
- Control pins: Pin(“BUTTON_3”) or Pin(92) or Pin(PC28) – top left; Pin(“BUTTON_2”) or Pin(91) or Pin(PC27) – top middle; Pin(“BUTTON_1”) or Pin(90) or Pin(PC26) – top right; Pin(“SWITCH_B”) or Pin(108) or Pin(PD12) – joystick left; Pin(“SWITCH_Y”) or Pin(105) or Pin(PD09) – joystick right; Pin(“SWITCH_U”) or Pin(116) or Pin(PD20) – joystick up; Pin(“SWITCH_X”) or Pin(104) or Pin(PD08) – joystick down; Pin(“SWITCH_Z”) or Pin(106) or Pin(PD10) – joystick button
Example: Wio-Terminal-Buttons.py
# MicroPython / Seeed Wio Terminal / SAMD51
# Wio-Terminal-Buttons.py - test the buttons
# scruss, 2022-10
# -*- coding: utf-8 -*-

# using Signal because button sense is inverted: 1 = off, 0 = on

from machine import Pin, Signal
from time import sleep_ms

pin_names = [
    "BUTTON_3",  # Pin(92) or Pin(PC28) - top left
    "BUTTON_2",  # Pin(91) or Pin(PC27) - top middle
    "BUTTON_1",  # Pin(90) or Pin(PC26) - top right
    "SWITCH_B",  # Pin(108) or Pin(PD12) - joystick left
    "SWITCH_Y",  # Pin(105) or Pin(PD09) - joystick right
    "SWITCH_U",  # Pin(116) or Pin(PD20) - joystick up
    "SWITCH_X",  # Pin(104) or Pin(PD08) - joystick down
    "SWITCH_Z",  # Pin(106) or Pin(PD10) - joystick button
]
pins = [None] * len(pin_names)

for i, name in enumerate(pin_names):
    pins[i] = Signal(Pin(name, Pin.IN), invert=True)

while True:
    for i in range(len(pin_names)):
        print(pins[i].value(), end="")
    print()
    sleep_ms(100)
Buzzer
A very quiet little PWM speaker.
- MicroPython interfaces: machine.PWM
- Control pin: Pin(“BUZZER”) or Pin(107) or Pin(“PD11”)
Example: Wio-Terminal-Buzzer.py
# MicroPython / Seeed Wio Terminal / SAMD51
# Wio-Terminal-Buzzer.py - play a scale on the buzzer with PWM
# scruss, 2022-10
# -*- coding: utf-8 -*-

from time import sleep_ms
from machine import Pin, PWM

pwm = PWM(Pin("BUZZER", Pin.OUT))  # or Pin(107) or Pin("PD11")

cmaj = [262, 294, 330, 349, 392, 440, 494, 523]  # C Major Scale frequencies

for note in cmaj:
    print(note, "Hz")
    pwm.duty_u16(32767)  # 50% duty
    pwm.freq(note)
    sleep_ms(225)
    pwm.duty_u16(0)  # 0% duty - silent
    sleep_ms(25)
Light Sensor
This is a simple photo diode. It doesn’t seem to return any kind of calibrated value. Reads through the back of the case.
- MicroPython interfaces: machine.ADC
- Control pin: machine.ADC(“PD01”)
Example code: Wio-Terminal-LightSensor.py
# MicroPython / Seeed Wio Terminal / SAMD51
# Wio-Terminal-LightSensor.py - print values from the light sensor
# scruss, 2022-10
# -*- coding: utf-8 -*-

from time import sleep_ms
from machine import ADC

# PD15-22C/TR8 photodiode
light_sensor = ADC("PD01")

while True:
    print([light_sensor.read_u16()])
    sleep_ms(50)
Microphone
Again, a simple analogue sensor:
- MicroPython interfaces: machine.ADC
- Control pin: machine.ADC(“MIC”)
Example: Wio-Terminal-Microphone.py
# MicroPython / Seeed Wio Terminal / SAMD51
# Wio-Terminal-Microphone.py - print values from the microphone
# scruss, 2022-10
# -*- coding: utf-8 -*-

from time import sleep_ms
from machine import ADC

mic = ADC("MIC")

while True:
    print([mic.read_u16()])
    sleep_ms(5)
Grove I²C Port
The Wio Terminal has two Grove ports: the one on the left (under the speaker port) is an I²C port. As I don’t know what you’ll be plugging in there, this example does a simple bus scan. You could make an appalling typewriter if you really wanted.
- MicroPython interfaces: machine.I2C (channel 3), machine.Pin
- Control pins: scl=Pin(“SCL1”), sda=Pin(“SDA1”)
Example: Wio-Terminal-Grove-I2C.py
# MicroPython / Seeed Wio Terminal / SAMD51
# Wio-Terminal-Grove-I2C.py - show how to connect on Grove I2C
# scruss, 2022-10
# -*- coding: utf-8 -*-

from machine import Pin, I2C

# NB: This doesn't do much of anything except list what's
# connected to the left (I²C) Grove connector on the Wio Terminal

i2c = I2C(3, scl=Pin("SCL1"), sda=Pin("SDA1"))
devices = i2c.scan()
if len(devices) == 0:
    print("No I²C devices connected to Grove port.")
else:
    print("Found these I²C devices on the Grove port:")
    for n, id in enumerate(devices):
        print(" device", n, ": ID", id, "(hex:", hex(id) + ")")
LIS3DH Accelerometer
This is also an I²C device, but connected to a different port (both logically and physically) than the Grove one.
- MicroPython interfaces: machine.I2C (channel 4), machine.Pin
- Control pins: scl=Pin(“SCL0”), sda=Pin(“SDA0”)
- Library: from MicroPython-LIS3DH, copy lis3dh.py to the Wio Terminal’s small file system. Better yet, compile it to mpy using mpy-cross to save even more space before you copy it across
Example: Wio-Terminal-Accelerometer.py (based on tinypico-micropython/lis3dh library/example.py)
# MicroPython / Seeed Wio Terminal / SAMD51
# Wio-Terminal-Accelerometer.py - test out accelerometer
# scruss, 2022-10
# -*- coding: utf-8 -*-
# based on
# https://github.com/tinypico/tinypico-micropython/tree/master/lis3dh%20library

import lis3dh, time, math
from machine import Pin, I2C

i2c = I2C(4, scl=Pin("SCL0"), sda=Pin("SDA0"))
imu = lis3dh.LIS3DH_I2C(i2c)

last_convert_time = 0
convert_interval = 100  # ms
pitch = 0
roll = 0


# Convert acceleration to Pitch and Roll
def convert_accell_rotation(vec):
    x_Buff = vec[0]  # x
    y_Buff = vec[1]  # y
    z_Buff = vec[2]  # z

    global last_convert_time, convert_interval, roll, pitch

    # We only want to re-process the values every 100 ms
    if last_convert_time < time.ticks_ms():
        last_convert_time = time.ticks_ms() + convert_interval

        roll = math.atan2(y_Buff, z_Buff) * 57.3
        pitch = (
            math.atan2((-x_Buff), math.sqrt(y_Buff * y_Buff + z_Buff * z_Buff)) * 57.3
        )

    # Return the current values in roll and pitch
    return (roll, pitch)


# If we have found the LIS3DH
if imu.device_check():
    # Set range of accelerometer (can be RANGE_2_G, RANGE_4_G, RANGE_8_G or RANGE_16_G).
    imu.range = lis3dh.RANGE_2_G

    # Loop forever printing values
    while True:
        # Read accelerometer values (in m / s ^ 2). Returns a 3-tuple of x, y,
        # z axis values. Divide them by 9.806 to convert to Gs.
        x, y, z = [value / lis3dh.STANDARD_GRAVITY for value in imu.acceleration]
        print("x = %0.3f G, y = %0.3f G, z = %0.3f G" % (x, y, z))

        # Convert acceleration to Pitch and Roll and print values
        p, r = convert_accell_rotation(imu.acceleration)
        print("pitch = %0.2f, roll = %0.2f" % (p, r))

        # Small delay to keep things responsive but give time for interrupt processing.
        time.sleep(0.1)
SD Card
- MicroPython interfaces: machine.SPI (channel 6), machine.Pin, machine.Signal
- Control Pins: Pin(“SD_SCK”), Pin(“SD_MOSI”), Pin(“SD_MISO”) for SD access. Pin(“SD_DET”) is low if an SD card is inserted, otherwise high
- Library: copy sdcard.py from micropython-lib to the Wio Terminal’s file system.
Rather than provide a small canned example (there’s one here, if you must: Wio-Terminal-SDCard.py) here’s my boot.py startup file, showing how I safely mount an SD card if there’s one inserted, but keep booting even if it’s missing:
# boot.py - MicroPython / Seeed Wio Terminal / SAMD51

import sys

sys.path.append("/lib")

import machine
import gc
import os
import sdcard

machine.freq(160000000)  # fast but slightly jittery clock
gc.enable()

# mount SD card if there's one inserted
try:
    sd_detected = machine.Signal(
        machine.Pin("SD_DET", machine.Pin.IN),
        invert=True,
    )
    sd_spi = machine.SPI(
        6,
        sck=machine.Pin("SD_SCK"),
        mosi=machine.Pin("SD_MOSI"),
        miso=machine.Pin("SD_MISO"),
        baudrate=40000000,
    )
    sd = sdcard.SDCard(sd_spi, machine.Pin("SD_CS"))
    if sd_detected.value() == True:
        os.mount(sd, "/SD")
        print("SD card mounted on /SD")
    else:
        raise Exception("SD card not inserted, can't mount /SD")
except:
    print("SD card not found")
ILI9341 Display
I’m going to use the library rdagger/micropython-ili9341: MicroPython ILI9341Display & XPT2046 Touch Screen Driver because it’s reliable, and since it’s written entirely in MicroPython, it’s easy to install. It’s not particularly fast, though.
The Wio Terminal may have an XPT2046 resistive touch controller installed, but I haven’t been able to test it. There are LCD_XL, LCD_YU, LCD_XR and LCD_YD lines on the schematic that might indicate it’s there, though.
- MicroPython interfaces: machine.SPI (channel 7), machine.Pin.
- Control Pins: Pin(“LCD_SCK”), Pin(“LCD_MOSI”), Pin(“LCD_MISO”). Pin(“LED_LCD”) is the backlight control
- Library: copy ili9341.py from rdagger /micropython-ili9341 to the Wio Terminal’s file system.
This demo draws rainbow-coloured diamond shapes that change continuously.
Example: Wio-Terminal-Screen.py
# MicroPython / Seeed Wio Terminal / SAMD51
# Wio-Terminal-Screen.py - output something on the ILI9341 screen
# scruss, 2022-10
# -*- coding: utf-8 -*-

from time import sleep
from ili9341 import Display, color565
from machine import Pin, SPI


def wheel565(pos):
    # Input a value 0 to 255 to get a colour value.
    # The colours are a transition r - g - b - back to r.
    # modified to return RGB565 value for ili9341 - scruss
    (r, g, b) = (0, 0, 0)
    if (pos < 0) or (pos > 255):
        (r, g, b) = (0, 0, 0)
    if pos < 85:
        (r, g, b) = (int(pos * 3), int(255 - (pos * 3)), 0)
    elif pos < 170:
        pos -= 85
        (r, g, b) = (int(255 - pos * 3), 0, int(pos * 3))
    else:
        pos -= 170
        (r, g, b) = (0, int(pos * 3), int(255 - pos * 3))
    return (r & 0xF8) << 8 | (g & 0xFC) << 3 | b >> 3


# screen can be a little slow to turn on, so use built-in
# LED to signal all is well
led = Pin("LED_BLUE", Pin.OUT)
backlight = Pin("LED_LCD", Pin.OUT)  # backlight is not a PWM pin
spi = SPI(
    7, sck=Pin("LCD_SCK"), mosi=Pin("LCD_MOSI"), miso=Pin("LCD_MISO"), baudrate=4000000
)
display = Display(spi, dc=Pin("LCD_D/C"), cs=Pin("LCD_CS"), rst=Pin("LCD_RESET"))
display.display_on()
display.clear()
led.on()  # shotgun debugging, embedded style
backlight.on()

# use default portrait settings: x = 0..239, y = 0..319
dx = 3
dy = 4
x = 3
y = 4
i = 0

try:
    while True:
        # display.draw_pixel(x, y, wheel565(i))
        display.fill_hrect(x, y, 3, 4, wheel565(i))
        i = (i + 1) % 256
        x = x + dx
        y = y + dy
        if x <= 4:
            dx = -dx
        if x >= 234:
            dx = -dx
        if y <= 5:
            dy = -dy
        if y >= 313:
            dy = -dy
except:
    backlight.off()
    led.off()
    display.display_off()
-
broken box is crying
poor boxy
-
MicroPython MIDI mayhem (kinda)
It pleased me to learn about umidiparser – MIDI file parser for Micropython. Could I use my previous adventures in beepy nonsense to turn a simple MIDI file into a terrible squeaky rendition of same? You betcha!
MIDI seems to be absurdly complex. In all the files I looked at, there didn’t seem to be much of a standard in encoding whether the note duration was in the NOTE_ON event or the NOTE_OFF event. Eventually, I managed to fudge a tiny single channel file that had acceptable note durations in the NOTE_OFF events. Here is the file:
I used the same setup as before:
piezo between pins 26 and 23
With this code:
# extremely crude MicroPython MIDI demo
# MicroPython / Raspberry Pi Pico - scruss, 2022-08
# see https://github.com/bixb922/umidiparser

import umidiparser
from time import sleep_us
from machine import Pin, PWM

# pin 26 - GP20; just the right distance from GND at pin 23
# to use one of those PC beepers with the 4-pin headers
pwm = PWM(Pin(20))
led = Pin('LED', Pin.OUT)


def play_tone(freq, usec):
    # play RTTL/midi notes, also flash onboard LED
    # original idea thanks to
    # https://github.com/dhylands/upy-rtttl
    print('freq = {:6.1f} usec = {:6.1f}'.format(freq, usec))
    if freq > 0:
        pwm.freq(int(freq))  # Set frequency
        pwm.duty_u16(32767)  # 50% duty cycle
    led.on()
    sleep_us(int(0.9 * usec))  # Play for a number of usec
    pwm.duty_u16(0)  # Stop playing for gap between notes
    led.off()
    sleep_us(int(0.1 * usec))  # Pause for a number of usec


# map MIDI notes (0-127) to frequencies. Note 69 is 440 Hz ('A4')
freqs = [440 * 2**((float(x) - 69) / 12) for x in range(128)]

for event in umidiparser.MidiFile("lg2.mid", reuse_event_object=True):
    if event.status == umidiparser.NOTE_OFF and event.channel == 0:
        play_tone(freqs[event.note], event.delta_us)
This isn’t by any means a general MIDI parser, but is rather specialized to play monophonic tunes on channel 0. The result is gloriously awful:
apologies to LG