Embroidery format HUS

The HUS embroidery format is fairly simple, but it relies on a defunct proprietary compression scheme, which makes it one of the least accessible formats for open source software.


HUS Header

| Type | Bytes | Value | Description |
|---|---|---|---|
| `u32` | 4 | 5B AF C8 00 | Magic number prefix. |
| `u32` | 4 | 0-n | Number of stitches. |
| `u32` | 4 | 0-n | Number of colors. |
| `u16` | 2 | extent | pos_x extent. |
| `u16` | 2 | extent | pos_y extent. |
| `u16` | 2 | extent | neg_x extent. |
| `u16` | 2 | extent | neg_y extent. |
| `u32` | 4 | 0-n | Command offset. |
| `u32` | 4 | 0-n | x_offset. |
| `u32` | 4 | 0-n | y_offset. |
| `char` | 8 | 8-byte string | Unknown string value. |
| `u16` | 2 | 0 | Unknown; only 0 has been seen. |
| `u16` | (number of colors) * 2 | 0-29 | Magic number HUS colors. |
| `compressed bytes` | x_offset - command_offset | compressed | Command compressed bytes. |
| `compressed bytes` | y_offset - x_offset | compressed | X compressed bytes. |
| `compressed bytes` | EOF - y_offset | compressed | Y compressed bytes. |
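
A minimal C sketch of a header reader follows. The field names are invented here, multi-byte values are assumed to be little-endian, and the 64-entry color array is an arbitrary sanity bound; reading field by field sidesteps struct padding.

```c
#include <stdint.h>
#include <stdio.h>

/* Field names are my own; the layout follows the table above. */
typedef struct {
    uint32_t magic;             /* 5B AF C8 00 reads as 0x00C8AF5B (LE) */
    uint32_t stitch_count;
    uint32_t color_count;
    uint16_t pos_x, pos_y;      /* positive extents */
    uint16_t neg_x, neg_y;      /* negative extents */
    uint32_t command_offset;    /* offsets of the three compressed blocks */
    uint32_t x_offset;
    uint32_t y_offset;
    char     unknown_string[9]; /* 8-byte string, NUL-terminated here */
    uint16_t unknown;           /* only 0 has been seen */
    uint16_t colors[64];        /* color_count entries, each 0-29 */
} HusHeader;

static uint32_t read_u32le(FILE *f) {
    uint8_t b[4] = {0};
    fread(b, 1, 4, f);
    return (uint32_t)b[0] | (uint32_t)b[1] << 8
         | (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24;
}

static uint16_t read_u16le(FILE *f) {
    uint8_t b[2] = {0};
    fread(b, 1, 2, f);
    return (uint16_t)(b[0] | b[1] << 8);
}

/* Returns 0 on success, -1 on a malformed header. */
int read_hus_header(FILE *f, HusHeader *h) {
    h->magic = read_u32le(f);
    if (h->magic != 0x00C8AF5Bu) return -1;   /* wrong magic number */
    h->stitch_count   = read_u32le(f);
    h->color_count    = read_u32le(f);
    h->pos_x = read_u16le(f);  h->pos_y = read_u16le(f);
    h->neg_x = read_u16le(f);  h->neg_y = read_u16le(f);
    h->command_offset = read_u32le(f);
    h->x_offset       = read_u32le(f);
    h->y_offset       = read_u32le(f);
    if (fread(h->unknown_string, 1, 8, f) != 8) return -1;
    h->unknown_string[8] = '\0';
    h->unknown = read_u16le(f);
    if (h->color_count > 64) return -1;       /* sanity bound for the sketch */
    for (uint32_t i = 0; i < h->color_count; i++)
        h->colors[i] = read_u16le(f);
    return 0;
}
```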


Compression Overview

The chief issue is the compressed byte sections. These are compressed in a format put out by Greenleaf in the mid-1990s. The compression lives on in compiled form within Windows, in a DLL called al21mfc.dll. Those who purchased the library, before the company changed hands repeatedly as its owners died, received with their purchase a copy of obfuscated C++ code, with all functions renamed to _###-style names, cryptic macros, and so on. However, Greenleaf was not the original author of this software: it licensed the compression from Robert Jung, who wrote the ARJ compression format a year later. The compression there is almost identical, except that the table size is 1k with Greenleaf rather than 22k as with ARJ, and Greenleaf uses a different value for one of the tables, 511 rather than 510. The ARJ compression scheme was covered by a 1992 patent which has since expired. This ultimately means that HUS embroidery was encumbered by obfuscated essential source code, a patent, several licensing agreements, and extreme obscurity, and it then died in obscurity: no known copies of the original source code seem to exist.

The compression scheme used in ARJ is as follows. Like most LZSS-based schemes, it produces a series of 9-bit numbers between 0 and 511. Numbers within the byte range 0-255 are simply appended to the output, whereas numbers beyond this range either signal the end of the stream (510) or declare a certain number of repeated characters. Those characters are found somewhere back in the lookup window, which in this case is 1024 bytes. When such a token is read, an additional read is made from the bit stream and the distance back is looked up in a Huffman table produced during the block read.
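
A sketch of that output loop in C, assuming two helper routines, `decode_token()` and `decode_position()`, that perform the Huffman-coded bit reads (filled in by the block-read sketch below). The minimum match length of 3 and the exact token-to-length mapping are assumptions modeled on ARJ, not confirmed details of the Greenleaf variant.

```c
#include <stdint.h>
#include <stddef.h>

#define WINDOW_SIZE 1024   /* the 1024-byte lookup window described above */
#define EXIT_TOKEN  510    /* end-of-stream token */

extern unsigned decode_token(void);     /* next 9-bit token, 0-511 */
extern unsigned decode_position(void);  /* distance back into the window */

/* Expand the token stream into out[]; returns the number of bytes written. */
size_t lzss_expand(uint8_t *out, size_t out_cap) {
    uint8_t window[WINDOW_SIZE] = {0};
    size_t pos = 0, n = 0;

    for (;;) {
        unsigned t = decode_token();
        if (t == EXIT_TOKEN)
            break;                                /* end of stream */
        if (t < 256) {
            /* Literal: bytes 0-255 are appended to the output as-is. */
            if (n < out_cap) out[n++] = (uint8_t)t;
            window[pos++ % WINDOW_SIZE] = (uint8_t)t;
        } else {
            /* Back-reference: copy len bytes from dist back in the
             * window, re-appending each one so overlapping runs work. */
            unsigned len  = t - 256 + 3;          /* assumed mapping */
            unsigned dist = decode_position() + 1;
            for (unsigned i = 0; i < len; i++) {
                uint8_t c = window[(pos + WINDOW_SIZE - dist) % WINDOW_SIZE];
                if (n < out_cap) out[n++] = c;
                window[pos++ % WINDOW_SIZE] = c;
            }
        }
    }
    return n;
}
```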

The first attempt to read a 9-bit token triggers a block read. The block begins with a 16-bit count of the number of tokens it contains, followed by a series of three Huffman tables. The first is the pre-token table, which is used to produce the second. The second is the character table, built by doing repeated lookups with the first table to decompress the character table's code lengths. The third is the distance table, which provides the distance back in the window for the replacement.
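
A structural sketch of that block read in C, filling in the `decode_token()` assumed above. Every helper name is mine, and the pre-token and distance table sizes (19 and 15 entries) are guesses modeled on ARJ's relatives; only the 16-bit token count and the 511-entry character table come from the description.

```c
#include <stdint.h>

struct huff;                                   /* opaque canonical decoder */
extern unsigned read_bits(int n);              /* next n bits of input */
extern struct huff *build_huff(const uint8_t *len, int nsym);
extern unsigned huff_decode(const struct huff *h);
extern void read_lengths_raw(uint8_t *len, int nsym);
extern void read_lengths_via(const struct huff *pre, uint8_t *len, int nsym);

static const struct huff *char_table, *dist_table;
static unsigned tokens_left;

unsigned decode_token(void) {
    if (tokens_left == 0) {
        /* Start of a block: a 16-bit count of the tokens it contains. */
        tokens_left = read_bits(16);

        /* Table 1: the pre-token table, code lengths stored directly. */
        uint8_t pre_len[19];
        read_lengths_raw(pre_len, 19);
        const struct huff *pre = build_huff(pre_len, 19);

        /* Table 2: the character table, whose code lengths are themselves
         * compressed and recovered by repeated lookups in the pre-token
         * table (expansion rules are described in the next paragraph). */
        uint8_t char_len[511];
        read_lengths_via(pre, char_len, 511);
        char_table = build_huff(char_len, 511);

        /* Table 3: the distance table, loaded like table 1. */
        uint8_t dist_len[15];
        read_lengths_raw(dist_len, 15);
        dist_table = build_huff(dist_len, 15);
    }
    tokens_left--;
    return huff_decode(char_table);            /* a 9-bit token */
}

unsigned decode_position(void) {
    return huff_decode(dist_table);            /* distance back in window */
}
```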

The tables are stored by merely providing their code lengths. The shortest codes go first, with ties for length going to the lowest symbol index. So length counts like "1, 2" (one 1-bit code and two 2-bit codes) or "1, 0, 4" (one 1-bit code, no 2-bit codes, and four 3-bit codes) can be built, provided the tables are balanced, and they will produce the same tables every time. The algorithmically built Huffman table is then used to decode bit codes. In the case of the pre-token table, it builds the character table through a series of values: values above 3 have 3 subtracted from them and are put directly into the character length table, while values less than 3 mean that a certain number of zeros are added to the character length table, since those symbols are never encoded. After the pre-token table yields the character length values, the character table is built. The distance table is merely loaded and built. These tables are then used to decode the variable-length elements, which are appended to the output in the manner outlined above.
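
Because the construction is fully determined by the lengths, the encoder and decoder can rebuild identical tables independently. Below is a self-contained C sketch of that canonical assignment (shortest codes first, ties to the lowest symbol index), with the two length examples above worked through in a comment; the 16-bit code-length bound is my own choice.

```c
#include <stdint.h>

#define MAX_BITS 16

/* Assign canonical codes from a table of code lengths. Fills code[s]
 * for every symbol s with len[s] != 0; returns -1 if the lengths
 * over-subscribe the tree (i.e. the table is not "balanced").
 *
 * Worked examples from the text:
 *   lengths {1, 2, 2}       -> codes 0, 10, 11
 *   lengths {1, 3, 3, 3, 3} -> codes 0, 100, 101, 110, 111
 */
int build_canonical_codes(const uint8_t *len, int nsym, uint16_t *code) {
    int count[MAX_BITS + 1] = {0};
    uint16_t first[MAX_BITS + 1];

    for (int s = 0; s < nsym; s++)
        count[len[s]]++;
    count[0] = 0;                       /* length 0 means "symbol unused" */

    /* The first code of each length is the first code of the previous
     * length, plus that length's count, shifted left one bit. */
    unsigned c = 0, slots = 0;
    for (int bits = 1; bits <= MAX_BITS; bits++) {
        c = (c + count[bits - 1]) << 1;
        first[bits] = (uint16_t)c;
        slots += (unsigned)count[bits] << (MAX_BITS - bits);
    }
    if (slots > (1u << MAX_BITS))
        return -1;                      /* over-subscribed: not decodable */

    /* Hand out codes in symbol order, breaking ties by lowest index. */
    for (int s = 0; s < nsym; s++)
        if (len[s] != 0)
            code[s] = first[len[s]]++;
    return 0;
}
```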

See Also