Thanks to Raph Levien and Electronic Arts for inspiring this post!

As a thin layer on top of assembly language, C’s integer arithmetic APIs have always been minimal, effectively just mapping the underlying assembly opcodes to the C arithmetic operators. In addition, while unsigned arithmetic can safely overflow in C, signed arithmetic overflow is undefined behaviour, and UB can lead to heartache for C developers.

More modern languages like Rust have a much richer integer API. By default, the standard addition and subtraction operators panic! on overflow or underflow in debug builds. In release builds, the same operators wrap in two’s complement, and that wrapping is well-defined behaviour. If the developer wants carrying, wrapping, checked, or saturating semantics explicitly, APIs for each of these modes are available.
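For concreteness, here’s a small sketch (mine, not from the original post) of what those Rust modes look like in practice:

```rust
fn main() {
    let a: i32 = i32::MAX;
    // checked_add returns None on overflow instead of panicking.
    assert_eq!(a.checked_add(1), None);
    // wrapping_add wraps around in two's complement.
    assert_eq!(a.wrapping_add(1), i32::MIN);
    // saturating_add clamps to the representable range.
    assert_eq!(a.saturating_add(1), i32::MAX);
    // overflowing_add returns the wrapped result plus an overflow flag.
    assert_eq!(a.overflowing_add(1), (i32::MIN, true));
    println!("ok");
}
```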

We don’t have these convenient APIs available in C yet (see the epilogue for some nuance), but it would be great to have them. Given that signed arithmetic overflow is undefined behaviour, can we build a function with the following C signature that works?

bool will_add_overflow(int32_t a, int32_t b);

All modern machines use two’s complement representation for negative integers, but C was developed when computing was still experimenting with other ways of representing signed integers. In this post, we will safely assume that none of the processors we’re targeting use one of the alternative forms. If you find yourself programming for esoteric machines, you may wish to consult your local manual or guru.

A valid, quick-and-dirty solution is to use integer promotion: perform the addition as 64-bit signed integers instead, then check whether the result is within the range of an int32_t:

bool will_add_overflow_64bit(int32_t a, int32_t b) {
    // a and b are promoted to 64-bit signed integers
    int64_t result = (int64_t)a + b;
    if (result < INT32_MIN || result > INT32_MAX) {
        return true;
    }
    return false;
}

No undefined behaviour! It also has the advantage of being easy to read and obviously correct. But this requires sign-extending two 32-bit numbers and performing a 64-bit addition:

        movsxd  rax, edi
        movsxd  rcx, esi
        add     rcx, rax
        movsxd  rax, ecx
        cmp     rax, rcx
        setne   al

We can do better by taking advantage of the fact that on two’s-complement machines, addition is bitwise-identical between signed and unsigned numbers so long as you ignore carry, overflow, underflow and any other flags. In addition, the C specification (C99 ¶2) guarantees that the bit pattern will be preserved on a two’s-complement system.

We know that unsigned overflow is not UB, and we know that we can only overflow if a > 0 and b > 0, and we can only underflow if a < 0 and b < 0. If either a or b is zero, we’re safe. We also know that adding two positive integers must result in a positive result if no overflow occurred. For two negative integers, the result must also be negative. If we find that the sign of the sum does not match the sign expected, we’ve wrapped around!

bool will_add_overflow_if(int32_t a, int32_t b) {
    // Explicitly convert to uint32_t and then back
    int32_t c = (int32_t)((uint32_t)a + (uint32_t)b);
    if (a > 0 && b > 0 && c < 0) {
        return true;
    }
    if (a < 0 && b < 0 && c >= 0) {
        return true;
    }
    return false;
}

And we get a fairly hefty assembly representation:

        lea     ecx, [rsi + rdi]
        test    edi, edi
        jle     .LBB2_3
        test    esi, esi
        jle     .LBB2_3
        mov     al, 1
        test    ecx, ecx
        jns     .LBB2_3
        test    esi, edi
        sets    dl
        test    ecx, ecx
        setns   al
        and     al, dl

This is arguably a bit worse, as now we have a branch in the mix. But we can start to see a pattern here:

a      b      c       overflows?
> 0    > 0    < 0     true
< 0    < 0    >= 0    true

In two’s-complement, the expression x < 0 is equivalent to the expression (x & 0x80000000) == 0x80000000. Similarly, x >= 0 is equivalent to (x & 0x80000000) == 0.

Let’s create a NEG macro with the above expression and reproduce our pseudo-truth table in code. Note that we’ll also collapse the if statements into a single boolean expression so we can eliminate those branches:

bool will_add_overflow_expression(int32_t a_, int32_t b_) {
    // Explicitly work with uint32_t in this function
    uint32_t a = (uint32_t)a_, b = (uint32_t)b_;
    uint32_t c = (uint32_t)a + (uint32_t)b;
    #define NEG(x) (((uint32_t)(x) & 0x80000000) == 0x80000000)
    return ((!NEG(a) && !NEG(b) && NEG(c))
        || (NEG(a) && NEG(b) && !NEG(c)));
    #undef NEG
}

This is looking better, but because we’re using short-circuiting logic, those branches are still there: we still have a jump!

        mov     eax, esi
        or      eax, edi
        setns   dl
        mov     ecx, esi
        add     ecx, edi
        sets    al
        and     al, dl
        test    edi, edi
        jns     .LBB3_3
        test    al, al
        jne     .LBB3_3
        test    esi, esi
        sets    dl
        test    ecx, ecx
        setns   al
        and     al, dl

We can get rid of the branches by using non-short-circuiting bitwise operators:

bool will_add_overflow_bitwise(int32_t a_, int32_t b_) {
    uint32_t a = (uint32_t)a_, b = (uint32_t)b_;
    uint32_t c = (uint32_t)a + (uint32_t)b;
    #define NEG(x) (((uint32_t)(x) & 0x80000000) == 0x80000000)
    return ((!NEG(a) & !NEG(b) & NEG(c))
        | (NEG(a) & NEG(b) & !NEG(c)));
    #undef NEG
}

And now it’s starting to look pretty compact (though we can do better):

        lea     ecx, [rsi + rdi]
        mov     eax, esi
        or      eax, edi
        and     esi, edi
        xor     eax, esi
        not     eax
        and     eax, ecx
        xor     eax, esi
        shr     eax, 31

Notice that the assembly gives us a bit of a hint here that repeated use of our macro isn’t actually necessary. The sign bit we’re interested in isn’t tested until the end of the function! Because we’re testing the same bit in every part of the expression, and bits in a given position only interact with other bits in the same position, we can pull that bit test out of the whole expression:

bool will_add_overflow_bitwise_2(int32_t a_, int32_t b_) {
    uint32_t a = (uint32_t)a_, b = (uint32_t)b_;
    uint32_t c = (uint32_t)a + (uint32_t)b;
    #define NEG(x) (((uint32_t)(x) & 0x80000000) == 0x80000000)
    return NEG((~a & ~b & c) | (a & b & ~c));
    #undef NEG
}

We can also make use of the knowledge that testing the sign bit is the same as an unsigned shift right:

bool will_add_overflow_bitwise_3(int32_t a_, int32_t b_) {
    uint32_t a = (uint32_t)a_, b = (uint32_t)b_;
    uint32_t c = (uint32_t)a + (uint32_t)b;
    return ((~a & ~b & c) | (a & b & ~c)) >> 31;
}

Not too bad! But let’s revisit the truth table and instead use the value of the sign bit directly. What we see is that a and b need to have the same sign bit, and c needs to have the opposite value:

a  b  c  overflows?
0  0  0  false
0  0  1  true
0  1  x  false
1  0  x  false
1  1  0  true
1  1  1  false

This truth table shows that what we ultimately want to test is this:

(a == 1 && b == 1 && c == 0) || (a == 0 && b == 0 && c == 1)

… but with a bit of work, we can simplify this down to two shorter expression candidates:

(a == b) && (a == !c)
(c == !a) && (c == !b)

For bit twiddling like we’re doing here, xor (^) can work like a “not-equals” operator (outputs 1 iff the inputs are 0,1 or 1,0), which means we can re-write our two expressions like so:

~(a ^ b) & (c ^ a)
(c ^ a) & (c ^ b)

By looking at those two options, is there a hint that one might be cheaper to implement? Let’s plug both into the compiler and see what we get!

bool will_add_overflow_optimized_a(int32_t a_, int32_t b_) {
    uint32_t a = (uint32_t)a_, b = (uint32_t)b_;
    uint32_t c = (uint32_t)a + (uint32_t)b;
    return (~(a ^ b) & (c ^ a)) >> 31;
}

bool will_add_overflow_optimized_b(int32_t a_, int32_t b_) {
    uint32_t a = (uint32_t)a_, b = (uint32_t)b_;
    uint32_t c = (uint32_t)a + (uint32_t)b;
    return ((c ^ a) & (c ^ b)) >> 31;
}

And the resulting compiled versions (the first expression, then the second):

        lea     eax, [rsi + rdi]
        xor     eax, edi
        mov     ecx, edi
        xor     ecx, esi
        not     ecx
        and     eax, ecx
        shr     eax, 31

        lea     eax, [rsi + rdi]
        xor     edi, eax
        xor     eax, esi
        and     eax, edi
        shr     eax, 31

We have a clear winner here: the compiler can do a much better job with (c ^ a) & (c ^ b). This is most likely because of the common sub-expression and the removal of the bitwise-not operator.

We can also confirm that there’s no known undefined behaviour by compiling it with clang’s -fsanitize=undefined feature. No UB warnings are printed, which means no UB was detected!


While this is the fastest we can get with bog-standard C99, this isn’t necessarily the best we can do.

Rust makes use of the compiler intrinsics to access the overflow flag of the processor directly:

pub fn add(a: i32, b: i32) -> bool {
    // overflowing_add returns (wrapped_sum, did_overflow)
    a.overflowing_add(b).1
}

        add     edi, esi
        seto    al

It turns out that both GCC and LLVM have C intrinsics that you can use. While they are non-portable to some compilers, they drastically simplify the assembly output!

bool will_add_overflow_intrinsic(int32_t a, int32_t b) {
    int32_t result;
    return __builtin_add_overflow(a, b, &result);
}

And, just like with the Rust compiler above, this generates optimal assembly!

        add     edi, esi
        seto    al

No need to worry about this being so deeply compiler-specific for now, however: this will be standardized in C23 with the addition of the functions in the stdckdint.h header.
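The C23 version looks almost identical to the intrinsic version: ckd_add returns true on overflow and stores the wrapped result. Here’s a sketch that falls back to the GCC/Clang builtin when the header isn’t available yet (the checked_add macro name is mine):

```c
#include <stdbool.h>
#include <stdint.h>

// C23 standardizes checked arithmetic in <stdckdint.h>. Fall back to the
// GCC/Clang builtin on toolchains that don't ship the header yet.
#if defined(__has_include) && __has_include(<stdckdint.h>)
#include <stdckdint.h>
#define checked_add(r, a, b) ckd_add((r), (a), (b))
#else
#define checked_add(r, a, b) __builtin_add_overflow((a), (b), (r))
#endif

bool will_add_overflow_c23(int32_t a, int32_t b) {
    int32_t result;
    return checked_add(&result, a, b);
}
```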

A full suite of tests to explore the solutions is available on Godbolt or as a Gist.

Read full post

I was hoping to make more progress on self-hosting my scripting language (Kalos) this week, but I’m running out of steam because I think I coded myself into a corner. This is not a post where I’ve got everything figured out, but instead I’m taking a few moments to re-hash where things are at and figure out a plan for the future.

My original plan for this language was to offer a Python-like experience with minimal resource requirements: it should be able to run on an AVR, on bare metal, or even as part of a DOS executable. I had originally planned to support compilation on those devices, and even built a zero-allocation parser. The runtime is lightweight, integers are a configurable size, and strings are even optional. I believe it’ll be a great option on lower-end devices where Python is just too heavy.

Over the last week I started work to extract a small piece of the parser that deals with KIDL (example of KIDL below), the part of the language that glues it to your C code. This is currently part of the existing parser, and the only piece I could think of to carve off on the slow march to true self-hosting.

idl {
    prefix "kalos_module_idl_";
    dispatch name;

    module builtin {
        fn println(s: string) = kalos_idl_compiler_println;
        fn print(s: string) = kalos_idl_compiler_print;
        fn log(s: string) = kalos_idl_compiler_log;
    }
}

Where I’m stuck now is that the language is almost good enough to parse itself, but I’m finding lots of corner cases and papercut bugs that make it less-than-ideal. For example, I’m finding that the parser as-is is not quite good enough to handle function calls that are deeply nested in expressions.

I also ended up hacking in some dynamic-dispatch objects that help with not having classes, but that’s not a long-term thing that I want to support in lieu of a proper object/class system.

Eventually I’ll have to commit to rewriting the whole parser in Kalos, but as the title of the post suggests, I’m stuck in a potential energy well where the next steps are going to be difficult. The current parser is written in C and – while the code is pretty clean – it’s a lot of work to make changes to it. It’s going to take some time and effort to add support for things like classes, tear-off functions, etc, and being allocation-free doesn’t make any of this easy.

My mistake was being too ambitious and going right for C as the bootstrap, rather than something higher level. I should have started with Python as the bootstrap!

So I need to gather enough energy to choose and work on one of the following paths:

  1. Commit to rewriting the parser in Kalos, maybe after adding support for hashtables/dicts to the language. The language spec is still small enough that I could port the current parser. Debugging is very difficult, and you need to ensure that you’re running the code while developing it to discover if you’ve accidentally stepped on one of the many landmines. Once I have the parser/compiler in a higher-level language like Kalos itself, these landmines will be much easier to fix!
  2. Rewrite the parser in Python, knowing that I’ll have to rewrite it in Kalos later. This might not be terrible because the language is supposed to be python-like and there might be a mechanical translation route available. The thought of writing more code to throw away doesn’t fill me with a lot of joy, but it might just be what I have to do.
  3. Scrap the hand-rolled parser and switch to something like Lemon. We’re already using the amazing re2c to write the lexer, so adding another tool isn’t a bad idea. Again, we’re putting in a bunch of effort knowing that this will be tossed away later, but maybe there’s a middle ground like just having Lemon build an AST, then have Kalos script generate the bytecode?
Read full post

This is the last part of a three-part series covering the odyssey of getting a new coffeemaker, learning BTLE and how it works, reverse-engineering the Bluetooth interface and Android applications for the coffeemaker, writing a Rust-based CLI interface, and finally, hooking it all up to a GitHub actions bot that lets you brew a coffee just by filing an issue!

In part 2 we got the command-line application working, and now it’s time to connect the dots and build a secure, yet web-accessible interface.

We could choose a standard web host, add some sort of authentication on top of it, build the right web tooling to integrate with the nice command-line application we built, and all the associated security so random people can’t brew coffee. But as you’ve guessed from the title of these posts, we’re going to hook this command-line app into a private GitHub repo as our “interface”.

Making use of GitHub issues for automating weird things isn’t new, but I think this is the first time you can make coffee from it!

Getting Started

Here’s our goal:

  1. We want to allow users to brew a coffee from a GitHub issue, which will be pre-populated from a number of pre-defined templates
  2. The issue will contain part of the command-line that we want to run, and we’ll need to validate that it’s reasonable and correct, and that nobody is trying to inject any sort of “funny business” to break/backdoor the runner
  3. We don’t want coffee brewers to have to chase down the status of the brewing operation, so we’re going to make use of issue comments as our basic UI. The user will be able to follow the progress of their coffee inside of the issue, and get a notification when it’s done.

This is what the user will see just before they brew the coffee:

The first question you might have is how we’re going to talk to a Bluetooth coffeemaker from GitHub’s system. This part turns out to be pretty easy: we can use GitHub self-hosted runners as a backdoor into the coffeemaker’s physical location! By running this on a computing device that has a Bluetooth radio in proximity to the coffeemaker, we can send commands to it in response to events occurring in a repo. Conveniently the Raspberry Pi 3 Model B and Pi 4 both support Bluetooth, but in our case we’re going to be using a spare MacBook that’s kicking around.

First thing, we need to create a new runner on GitHub for our project, and then set up the runner on the MacBook:

curl -O -L${version}/actions-runner-osx-x64-${version}.tar.gz
tar xzf ./actions-runner-osx-x64-${version}.tar.gz
./ --url --token ${token}

GitHub actions are pretty flexible and we have a huge number of events that can trigger them. In our case, we want the creation of a new issue to trigger a run, so our trigger becomes:

on:
  issues:
    types: [opened]

We’ll pull in the create-or-update-comment action from peter-evans for updating the user about the status of their coffee:

  - name: Add comment
    uses: peter-evans/create-or-update-comment@v2

And once the coffee is brewed or the process has failed for some other reason, we’ll want to close that issue, so we’re going to pull in peter-evans/close-issue for this:

  - name: Close issue
    if: always()
    uses: peter-evans/close-issue@v2

The actual brewing part will be pretty easy as well, but it’s going to require us to fetch the text of the issue and use that to create the command-line to run.

Let’s take a look at the event information that GitHub provides to us in $GITHUB_EVENT_PATH. There’s a lot that GitHub provides for us in this file, and this particular one is trimmed down significantly:

{
    "action": "opened",
    "issue": {
        "body": "This is my issue body comment\r\n",
        "title": "This is my issue title!"
    }
}

jq is one of the best tools for integrating JSON APIs with shell scripts, so we’ll make use of that. We’ll create a small test JSON file called test.json that contains just the interesting subset of what’s available in the file at $GITHUB_EVENT_PATH:

{
    "action": "opened",
    "issue": {
        "body": "This is my issue body comment\r\n",
        "title": "This is my issue title!"
    }
}

First, we can test extraction of the issue body:

$ jq -r '.issue.body' < test.json
This is my issue body comment


That worked, but we’ve got some extra whitespace there. We can trim that with jq’s gsub function. By replacing leading and trailing whitespace (gsub("^\\s+|\\s+$";"")) with nothing, we can get just the text of the comment:

$ jq -r '.issue.body|gsub("^\\s+|\\s+$";"")' < test.json
This is my issue body comment


Extracting the Command-Line

Now what we want to do is allow the user to specify the command-line in the issue, but ensure that they can’t run anything nefarious on the runner. We developed a command-line cappuccino recipe in part 2 that we ran like this:

cargo run -- brew --beverage cappuccino --coffee 40 --milk 100 --taste extrastrong

So let’s extract out everything past the hyphens and make that the required input in our newly-filed issues:

brew --beverage cappuccino --coffee 40 --milk 100 --taste extrastrong

To work on extraction, we’ll update the issue.body field in our test.json file to this partial command-line:

{
    "action": "opened",
    "issue": {
        "body": "brew --beverage cappuccino --coffee 40 --milk 100 --taste extrastrong\r\n"
    }
}

Since we are creating a valid partial command-line for our brewing app, we can make use of the fact that we know the exact structure. In this case, we know we want it to:

  1. Start with the subcommand brew
  2. Next, contain the beverage to brew with --beverage <something>
  3. Finally, contain a list of beverage parameters which are limited to coffee, milk, hotwater, taste, and temperature. Each parameter is separated from its value by a space (ie: --coffee 100), and is either a number or an enumeration value (ie: --taste strong).

We can then build a regular expression that will be limited to just the arguments we’re allowing here. We’ll use the \w character class as it’s a close match to the values required by our parameters.

We could go further in validating the --beverage parameter, or the values for the ingredients, but we know that those are carefully checked in the application and we’ll let the application handle the validation:

^brew --beverage \w+( --(coffee|milk|taste|hotwater|temperature) \w+)*$

Now we can put it all together and extract the command-line like so (note that we have to escape the backslashes in the regular expression):

CMDLINE=$(jq -r '.issue.body|gsub("^\\s+|\\s+$";"")|select(test("^brew --beverage \\w+( --(coffee|milk|taste|hotwater|temperature) \\w+)*$"))' < $GITHUB_EVENT_PATH)

And that’s probably the only tricky part of the process. Now we can build our GitHub action, piece-by-piece.
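To convince ourselves the filter actually rejects injection attempts, we can exercise it locally (assuming jq is installed; the EVIL/GOOD names and the malicious body are hypothetical examples of mine):

```shell
# A hypothetical injection attempt vs. a well-formed request.
EVIL='{"issue":{"body":"brew --beverage cappuccino; rm -rf /\r\n"}}'
GOOD='{"issue":{"body":"brew --beverage cappuccino --coffee 40\r\n"}}'
FILTER='.issue.body|gsub("^\\s+|\\s+$";"")|select(test("^brew --beverage \\w+( --(coffee|milk|taste|hotwater|temperature) \\w+)*$"))'

# The injection attempt fails the anchored regex, so select() yields nothing:
echo "$EVIL" | jq -r "$FILTER"
# The well-formed request passes through, trimmed:
echo "$GOOD" | jq -r "$FILTER"
```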

Building the Workflow

First, the preamble that tells GitHub where and when to run the action, and what permissions it has:

name: Brew

on:
  issues:
    types: [opened]

jobs:
  brew:  # job id is illustrative
    runs-on: self-hosted
    permissions:
      issues: write


Our first step will drop a comment into the issue so the user knows things are happening:

      - name: Add initial comment
        uses: peter-evans/create-or-update-comment@v2
        id: comment
        with:
          issue-number: ${{ github.event.issue.number }}
          body: ' - [X] Getting ready to brew your ☕️!'

We’ll then install the longshot executable from cargo and let them know it was done:

      - name: Install longshot
        run: cargo install --root /tmp/longshot -- longshot
      - name: Update comment
        uses: peter-evans/create-or-update-comment@v2
        with:
          issue-number: ${{ github.event.issue.number }}
          comment-id: ${{ steps.comment.outputs.comment-id }}
          body: ' - [X] Installed the `longshot` executable'

Next, we’ll process the requested brew operation using the jq incantation from earlier. This step will create an output file that we’ll use to update the comment, as well as a cmdline.txt that will be used to execute our brewing operation later on:

      - name: Process the request
        run: |
          CMDLINE=$(
            jq -r '
              .issue.body |
              gsub("^\\s+|\\s+$";"") |
              select(
                test("^brew --beverage \\w+( --(coffee|milk|taste|hotwater|temperature) \\w+)*$")
              )' < $GITHUB_EVENT_PATH
          )
          echo Command-line we parsed was: $CMDLINE
          if [[ "$CMDLINE" == "" ]]; then
            echo " - [X] Couldn't parse the command line from your comment? 🤔" > $OUTFILE
            exit 1
          fi
          echo -n ' - [X]' Running brew command: \`$CMDLINE\` > $OUTFILE
          echo ' [(Log here)]('${GITHUB_REPOSITORY}'/actions/runs/'${GITHUB_RUN_ID}')' >> $OUTFILE
          echo "/tmp/longshot/bin/longshot $CMDLINE --device-name $" > cmdline.txt

We then update the comment with the output from the previous step:

      - name: Update comment
        uses: peter-evans/create-or-update-comment@v2
        with:
          issue-number: ${{ github.event.issue.number }}
          comment-id: ${{ steps.comment.outputs.comment-id }}

And run the brewing command:

      - name: Brew coffee
        run: |
          echo '<details><summary>Log</summary><pre>' >
          sh -c "`cat cmdline.txt`" | tee -a
          echo '</pre></details>' >>
          echo '✅ Success!' >>
      - name: Update comment on success
        uses: peter-evans/create-or-update-comment@v2
        with:
          issue-number: ${{ github.event.issue.number }}
          comment-id: ${{ steps.comment.outputs.comment-id }}

Finally, we’ll log a message to the comment on an error, and close the issue unconditionally:

      - name: Update comment on failure
        if: failure()
        uses: peter-evans/create-or-update-comment@v2
        with:
          issue-number: ${{ github.event.issue.number }}
          comment-id: ${{ steps.comment.outputs.comment-id }}
          body: |
            ❌ Failed! Please check the log for the reason.
      - name: Close issue
        if: always()
        uses: peter-evans/close-issue@v2

And with all those steps, we can get ourselves a coffee from GitHub!

While you can’t access my private repository that I’m using to brew us coffee at home, you can definitely try out the example repo that I’ve set up here which uses the command-line interface’s simulator and runs on GitHub’s action runners instead:

To recap, we:

Follow me on Mastodon for more updates on this adventure!

Read full post

This is part 2 of a three-part series covering the odyssey of getting a new coffeemaker, learning BTLE and how it works, reverse-engineering the Bluetooth interface and Android applications for the coffeemaker, writing a Rust-based CLI interface, and finally, hooking it all up to a GitHub actions bot that lets you brew a coffee just by filing an issue!

In part 1 we got our coffeemaker brewing using a sniffed command that we logged from the actual application, and then sent to the coffeemaker using a small Rust program. However, we don’t really understand the language we’re speaking yet, we’re just repeating the application-to-device babbling we’ve snooped.

Understanding the Packets

Now that we know that we can send a request, we want to understand what the format of the request looks like. The first thing we want to do is understand what a packet is. A packet is a chunk of data of a defined length, in contrast to a stream of data that continues indefinitely. Packets are used throughout most communication technologies and are a fundamental way of describing discrete communication messages.

When dealing with embedded devices, packets will almost always have a header, and sometimes a footer. The header and footer are called the framing of the packet, and they delimit it so we can identify exactly where it starts and stops.

Inside the header and footer might be things like start-of-packet, or end-of-packet markers, and a length for framing. There may also be additional metadata like a checksum to detect corruption.

Why is this framing important? Devices will often use framing to help recover from corruption. If you lose or corrupt a byte anywhere in the packet, you can often recover synchronization quickly by just restarting the packet parsing at the next byte that looks like a start byte.

Here’s a few packets we captured being sent from the coffeemaker to the application while asking it to brew a coffee, and then waiting for it to finish cleaning:

0d0575f0c4d5                             # Some sort of status request
0d1483f007010100410900be02030c001c0206dc # Brew a cappuccino
0d0883f00702062f41                       # Cancel brewing

d00783f0010064d9                         # Response to brew/stop request
d012750f02040100400700000000000000d621   # Status response
d012750f04050100400c030900000000001cf0   # Status response
d012750f000000000000036400000000009080   # Status response

What information can we glean from this? First of all, the first byte is always 0d or d0 (13 or 208 in decimal), suggesting this is a start-of-packet byte that varies depending on the direction of communication. That’s one byte probably identified!

+> 0d 0575f0c4d5
+> 0d 1483f007010100410900be02030c001c0206dc
+> 0d 0883f00702062f41
+> d0 0783f0010064d9
+> d0 12750f02040100400700000000000000d621
+> d0 12750f04050100400c030900000000001cf0
+> d0 12750f000000000000036400000000009080
+--------------------------------------- Start of packet (0x0d or 0xd0)

Next, the second byte of the packet seems to vary depending on the length of the packet, and it corresponds exactly with the change in packet size. This is highly likely to be a length, and from what we can see here in a couple of the packets we captured earlier, it would be the length of the packet not including the start-of-packet byte.

   v---5 bytes--v
0d 05 75 f0 c4 d5
   v------7 bytes-----v
d0 07 83 f0 01 00 64 d9
   v------------------18 bytes (0x12)------------------v
d0 12 75 0f 02 04 01 00 40 07 00 00 00 00 00 00 00 d6 21
^  ^
|  +------------------------------------------------------ Length of packet
+--------------------------------------------------------- Start of packet (0xd0)

We can’t glean much about the rest of the packet yet, but we’re getting some of the framing nailed down here. Time to pull out some more analysis tools.

There are three approaches we can use to understand the binary language of Delonghi’s ECAM machines:

  1. We can disassemble the firmware of the coffeemaker and understand what it expects and what it sends, or
  2. We can observe the application’s communication with the coffeemaker over a period of time, changing one or two things at a time and seeing what changes in the protocol, or
  3. We can disassemble the application that controls the coffeemaker and understand its inputs and outputs.

The firmware of the machine itself would be the ideal place for us to look, but according to various coffeemaker-hacking forums, the controllers are PIC-based, and disassembling/dumping PIC firmware is somewhat tricky.

In addition, a disadvantage to disassembling microcontroller firmware is that due to size constraints, it’s far less likely for text strings to have survived the compilation process to give us hints as to what’s going on. Finding leftover snippets of logging or “debug” print statements are gold for the reverse engineer, and we’d like to use that as a signpost to guide our future work.

Observing the application’s communication directly is definitely an option. This is inconvenient as we saw from the HCI snooping adventures earlier on, and we might not know how to perturb the system enough to fully understand most of the fields we receive.

The best option we’re left with is disassembling the application itself and looking for hints as to what it’s doing, hopefully for some symbols that give us names, or text strings that may give us context.

Disassembling the Delonghi APK

We’re going to disassemble the Delonghi APK to learn more about how we can automate our caffeine fix.

Android applications are shipped in APK (Android Package Kit) format, and we’re going to download a few historical versions of the APK from APK Pure, a site that archives older versions of shipped applications. Getting a few different versions is a good idea, as developers will sometimes forget to enable obfuscation in some versions. If we’re lucky enough to get a version of the application without obfuscation, we can get the internal names for constants and fields.

In the past, APK decompilation was somewhat tricky. As Android is Java-based, but doesn’t use Java’s bytecode directly, either you’d need to learn how to understand smali, or you’d use dex2jar to convert the app to a faux-Java JAR file and use standard Java analysis tools to reverse engineer it. Jadx is a new Java analysis tool, which is far easier to use and much more powerful than the older tools.

Let’s open the APK in Jadx. Once it has decompiled the app, the first thing we’ll notice is that there are package names here. This is great news: it suggests that even if the application is obfuscated, it’s not obfuscated fully, and we’ll be able to learn how it ticks.

When we dig into some of the classes, we see method and field names, showing us a pretty clear representation of the original source code. Even better news!

After decompiling, our first goal should be to conclusively identify the framing of the packets and answer the questions we raised earlier on. With some reading through the source, we can identify the source of one of the packets we saw being sent to the machine…

0d 05 75 f0 c4 d5

… as coming from here:

public static byte[] getByteMonitorMode(int i) {
    String str = TAG;
    DLog.m188e(str, "getByteMonitorMode  dataN" + i);
    byte[] bArr = new byte[6];
    bArr[0] = 0xd;                            // ** 0d
    bArr[1] = 5;                              // ** 05
    if (i == 0) {
        bArr[2] = DATA_0_ANSWER_ID;
    } else if (i == 1) {
        bArr[2] = DATA_1_ANSWER_ID;
    } else if (i == 2) {
        bArr[2] = 117;                        // ** 75
    }
    bArr[3] = 0xf0;                           // ** f0
    int checksum = checksum(bArr);
    bArr[4] = (byte) ((checksum >> 8) & 255); // ** c4
    bArr[5] = (byte) (checksum & 255);        // ** d5
    return bArr;
}

We can see the canonical source for every byte of that packet in this function. 0d is the start-of-packet header. 05 is the length, and it looks like it’s simply hardcoded here since this packet is always the same length. We can also see that the last two bytes are a checksum, and if we dig into the checksum function, how it is calculated:

public static int checksum(byte[] bArr) {
    int i = 7439;
    for (int i2 = 0; i2 < bArr.length - 2; i2++) {
        int i3 = (((i << 8) | (i >>> 8)) & 65535) ^ (bArr[i2] & 255);
        int i4 = i3 ^ ((i3 & 255) >> 4);
        int i5 = i4 ^ ((i4 << 12) & 65535);
        i = i5 ^ (((i5 & 255) << 5) & 65535);
    }
    return i & 65535;
}

Great! checksum looks like it could be one of the CRC family of functions, but we don’t necessarily have to fully understand it yet if we have its implementation. We now have all the framing necessary to construct any packet:

d0 LE (data) C1 C2
^  ^         ^--^-- Our checksum bytes   
+  +--------------- Length of packet, minus start-of-packet
+------------------ Start of packet (0xd0 or 0x0d depending on direction)

Now we can start to guess at the meaning of the rest of the bytes. Byte 2 appears to be a command ID. Byte 3 is a constant, and scanning the rest of the file suggests that it’s always 0x0f or 0xf0, depending on the command. That leaves the remainder of the packet for the command payload, if the command uses one.

0d 05 75 f0 c4 d5
^  ^  ^  ^  ^--^-- Our checksum
|  |  |  +-------- Always 0xf0 or 0x0f
|  |  +----------- The command ID (0x75 = monitor mode 2)
|  +-------------- The packet length
+----------------- Start of packet (0x0d)
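To make this framing concrete, here’s a small Rust sketch (a hypothetical helper, not code from the app) that splits a raw packet into its parts according to this layout:

```rust
/// Split a raw packet into (start, length, body, checksum) according to
/// the framing above. Returns None if the length byte doesn't match.
/// (A sketch: the checksum itself isn't verified here.)
fn split_packet(raw: &[u8]) -> Option<(u8, u8, &[u8], [u8; 2])> {
    if raw.len() < 4 {
        return None; // not enough room for start, length and checksum
    }
    // The length byte counts every byte except the start-of-packet byte.
    if raw[1] as usize != raw.len() - 1 {
        return None;
    }
    let body = &raw[2..raw.len() - 2];
    Some((raw[0], raw[1], body, [raw[raw.len() - 2], raw[raw.len() - 1]]))
}
```

Feeding it the monitor-mode packet above returns start 0x0d, length 5, body 75 f0, and checksum c4 d5.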

Since there’s a single spot where the packet checksum is calculated, and we know that every request requires one, we can find all the places in the app that create request packets by looking for the callers of checksum. This gives us a better idea of what we can ask the machine to do:

This is very interesting! There are a lot of commands here, and each has a reasonably well-defined name that we can use to understand what its function might be. We’ll have to start working through them one by one. With a bit of work, we can assemble a table of these commands:

enum EcamRequestId {
    SetBtMode = 17,
    MonitorV0 = 96,
    MonitorV1 = 112,
    MonitorV2 = 117,
    BeverageDispensingMode = 131,
    AppControl = 132,
    ParameterRead = 149,
    ParameterWrite = 144,
    ParameterReadExt = 161,
    StatisticsRead = 162,
    Checksum = 163,
    ProfileNameRead = 164,
    ProfileNameWrite = 165,
    RecipeQuantityRead = 166,
    RecipePriorityRead = 168,
    ProfileSelection = 169,
    RecipeNameRead = 170,
    RecipeNameWrite = 171,
    SetFavoriteBeverages = 173,
    RecipeMinMaxSync = 176,
    PinSet = 177,
    BeanSystemSelect = 185,
    BeanSystemRead = 186,
    BeanSystemWrite = 187,
    PinRead = 210,
    SetTime = 226,
}

Some of these request IDs are guesses based on the surrounding code context, and some of them are defined in enumerations in the application source. It’s a pretty good start for us to get going on figuring out how to brew our own beverage from scratch.

Revisiting the brew command

From just the bytes sent across the connection it’s difficult to understand exactly how the application is creating the packet to brew a coffee. However, in the disassembly we find a function dispenseBeveragePacket that appears to construct the packet we saw before:

if (arrayList != null) {  
    Iterator<ParameterModel> it3 = arrayList.iterator();  
    loop1: while (true) {  
        i4 = 0;  
        while (it3.hasNext()) {  
            next =;  
            if (next.getId() < 23 || next.getId() == 28) {  
                if (bool.booleanValue() || i != 200 || next.getId() != 2) {  
                    i6 = i6 + 2 + i4;  
                    bArr[i6 + 6] = (byte) next.getId();  
                    if (Utils.isTwoBytesShort(next.getId())) {  
                        bArr[i6 + 7] = (byte) (next.getDefValue() >> 8);  
                        bArr[i6 + 8] = (byte) next.getDefValue();  
                        i4 = 1;  
                        continue;  
                    }  
                    bArr[i6 + 7] = (byte) next.getDefValue();  
                }  
            }  
        }  
        break loop1;  
    }  
    i5 = i4;  
}

If we clean it up a bit to some Java pseudocode, it looks like this:

int index = 6;
for (ParameterModel param : params) {
    if (param.getId() < CLEAN_TYPE || param.getId() == ACCESSORIO) {
        array[index++] = param.getId();
        if (Utils.isTwoBytesShort(param.getId())) {
            array[index++] = param.getDefValue() >> 8;   // upper 8 bits
            array[index++] = param.getDefValue() & 0xff; // lower 8 bits
        } else {
            array[index++] = param.getDefValue() & 0xff;
        }
    }
}
So, from this pseudocode we see that a beverage is constructed from a list of ingredients (which with some further investigation, we find in the disassembled source as IngredientsId below), and an associated one or two byte value (param.getDefValue() above). Digging through the source for which ingredients are one or two bytes doesn’t yield much fruit, but maybe we can understand what’s going on by investigating further.
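In other words, each parameter is serialized as its ID followed by a one- or two-byte big-endian value. Here’s a sketch of that encoding in Rust (rather than Java), with the set of two-byte IDs passed in as a parameter, since we haven’t pinned down what Utils.isTwoBytesShort considers two bytes yet:

```rust
/// Encode (ingredient id, value) pairs the way dispenseBeveragePacket
/// does: the ID byte, then a one- or two-byte big-endian value.
/// `two_byte_ids` stands in for Utils.isTwoBytesShort, which we haven't
/// fully mapped out yet.
fn encode_ingredients(params: &[(u8, u16)], two_byte_ids: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    for &(id, value) in params {
        out.push(id);
        if two_byte_ids.contains(&id) {
            out.push((value >> 8) as u8); // upper 8 bits
            out.push(value as u8);        // lower 8 bits
        } else {
            out.push(value as u8);
        }
    }
    out
}
```

If we assume, for illustration, that Coffee is ID 1 and Milk is ID 9 (both two-byte values), then encoding Coffee=120, Milk=375, Taste=4 produces 01 00 78 09 01 77 02 04.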

Where do we go next? We find that there are two commands that, based on their name, seem to be related to the application UI used for brewing: RecipeQuantityRead and RecipeMinMaxSync.

Let’s try sending these commands to the machine! To do this, it’s time to re-visit our Rust code.

First, let’s create a function that will add the packet framing (header, length and checksum) to any payload we want to send:

pub fn checksum(buffer: &[u8]) -> [u8; 2] {
    let mut i: u16 = 7439;
    for x in buffer {
        let i3 = ((i << 8) | (i >> 8)) ^ (*x as u16);
        let i4 = i3 ^ ((i3 & 255) >> 4);
        let i5 = i4 ^ (i4 << 12);
        i = i5 ^ ((i5 & 255) << 5);
    }

    [(i >> 8) as u8, (i & 0xff) as u8]
}

fn packetize(buffer: &[u8]) -> Vec<u8> {
    let mut out = [&[
        0x0d,
        (buffer.len() + 3).try_into().expect("Packet too large"),
    ][..], buffer].concat();
    out.extend_from_slice(&checksum(&out));
    out
}

async fn run_with_peripheral(peripheral: Peripheral, characteristic: Characteristic) -> Result<(), Box<dyn std::error::Error>> {
    let data = packetize(/* data */);
    peripheral.write(&characteristic, &data, WriteType::WithoutResponse).await?;
    Ok(())
}

Now we can start sending some example packets and exploring the responses. Here are two test packets we’ll send (in pseudo-code and byte form):

Packet                             Bytes
RecipeInfo(profile=1, beverage=7)  0d 07 a6f0 01 07 75c2
RecipeMinMaxInfo(beverage=7)       0d 06 b0f0 07 6af4
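Before looking at the responses, we can sanity-check our framing code: it should reproduce the RecipeInfo request bytes above exactly. Here’s a self-contained version (repeating the checksum port and a simplified packetize so the snippet runs on its own):

```rust
// CRC-style checksum, ported from the decompiled checksum() function.
fn checksum(buffer: &[u8]) -> [u8; 2] {
    let mut i: u16 = 7439;
    for x in buffer {
        let i3 = ((i << 8) | (i >> 8)) ^ (*x as u16);
        let i4 = i3 ^ ((i3 & 255) >> 4);
        let i5 = i4 ^ (i4 << 12);
        i = i5 ^ ((i5 & 255) << 5);
    }
    [(i >> 8) as u8, (i & 0xff) as u8]
}

// Frame a payload: start byte, length, payload, then the checksum
// computed over everything that precedes it.
fn packetize(payload: &[u8]) -> Vec<u8> {
    let mut out = vec![0x0d, (payload.len() + 3) as u8];
    out.extend_from_slice(payload);
    out.extend_from_slice(&checksum(&out));
    out
}
```

And indeed, packetize(&[0xa6, 0xf0, 0x01, 0x07]) produces 0d 07 a6 f0 01 07 75 c2, byte-for-byte the RecipeInfo request in the table.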

Let’s look at what comes back, in raw byte form (some spaces added to help the reader visualize the packet):

RecipeInfo(profile=1, beverage=7):       
d0 17 a6f0 01 07 
↳ 0100410900be02030c001b0419011c02
↳ a2cd

RecipeMinMaxInfo(beverage=7):
d0 2c b0f0 07
↳ 010014004100b409003c00be038402000305
↳ 18010101190101010c0000001c000200
↳ 1b000404
↳ d03c

The first part of each response packet appears to be the machine echoing back the input. This makes sense, as the application will need a way to match up responses to requests.

For the remainder of the packet, we have the advantage of the decompilation above. We know the list of ingredients from the IngredientsId enumeration in the decompiled source, and if we match up the type of beverages with the ingredients, it makes a lot of sense that it’s what we’re seeing here:

RecipeInfo response:

> 01·00·41•09·00·be•02·03•0c·00•1b·04•19·01•1c·02
  ---+---- ---+---- --+-- --+-- --+-- --+-- --+--  
     |        |       |     |     |     |     ╰---- Accessorio=2
     |        |       |     |     |     |          
     |        |       |     |     |     ╰---------- Visible=1
     |        |       |     |     |                
     |        |       |     |     ╰---------------- IndexLength=4
     |        |       |     |                      
     |        |       |     ╰---------------------- Inversion=0
     |        |       |                            
     |        |       ╰---------------------------- Taste=3
     |        |                                    
     |        ╰------------------------------------ Milk=190
     ╰--------------------------------------------- Coffee=65

RecipeMinMaxInfo response:

> 01·00·14·00·41·00·b4•09·00·3c·00·be·03·84•02·00·03·05•
  ---------+---------- ---------+---------- -----+-----  
           |                    |                ╰------- Taste: 0<=3<=5
           |                    |                        
           |                    ╰------------------------ Milk: 60<=190<=900
           ╰--------------------------------------------- Coffee: 20<=65<=180
> 18·01·01·01•19·01·01·01•0c·00·00·00•1c·00·02·00•
  -----+----- -----+----- -----+----- -----+-----  
       |           |           |           ╰------- Accessorio: 0<=2<=0
       |           |           |                   
       |           |           ╰------------------- Inversion: 0<=0<=0
       |           |                               
       |           ╰------------------------------- Visible: 1<=1<=1
       ╰------------------------------------------- Programmable: 1<=1<=1
> 1b·00·04·04
       ╰------- IndexLength: 0<=4<=4

From reading the application source, these packets seem to represent the current settings for the beverage, and the minimum and maximum ranges for each of the parameters. We can also start to guess at what the lengths of each of the ingredients’ parameter values are: most are a single byte, but a handful seem to be reliably two bytes wide and they all seem to deal with liquids.
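Under that reading, each min/max entry is an ingredient ID followed by three fields (min, default, max) whose width depends on the ingredient. Here’s a hedged Rust sketch of the decoding, where the set of two-byte IDs is our current guess rather than something confirmed in the app:

```rust
/// Decode one (id, min, default, max) entry from a RecipeMinMaxInfo
/// payload, returning the entry and the remaining bytes. Field width
/// depends on whether the ingredient takes two-byte (liquid) values;
/// `two_byte_ids` is our guess so far, not something from the app.
fn decode_min_max<'a>(data: &'a [u8], two_byte_ids: &[u8]) -> Option<((u8, u16, u16, u16), &'a [u8])> {
    let (&id, rest) = data.split_first()?;
    let width = if two_byte_ids.contains(&id) { 2 } else { 1 };
    if rest.len() < width * 3 {
        return None;
    }
    // Read the n-th field (min=0, default=1, max=2) as big-endian.
    let field = |n: usize| {
        let f = &rest[n * width..(n + 1) * width];
        if width == 2 { u16::from_be_bytes([f[0], f[1]]) } else { f[0] as u16 }
    };
    Some(((id, field(0), field(1), field(2)), &rest[width * 3..]))
}
```

Decoding 01 00 14 00 41 00 b4 this way yields Coffee: 20<=65<=180, matching the annotation above.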

Let’s confirm our intuition here by looking at the recipes for a latte and hot water:

Latte recipe:

---+--- ---+--- --+-- --+-- --+-- --+-- --+--  
   |       |      |     |     |     |     ╰---- Accessorio=2
   |       |      |     |     |     |          
   |       |      |     |     |     ╰---------- Visible=1
   |       |      |     |     |                
   |       |      |     |     ╰---------------- IndexLength=4
   |       |      |     |                      
   |       |      |     ╰---------------------- Inversion=0
   |       |      |                            
   |       |      ╰---------------------------- Taste=3
   |       |                                   
   |       ╰----------------------------------- Milk=500
   ╰------------------------------------------- Coffee=60

Hot water recipe:

---+--- --+-- --+--  
   |      |     ╰---- Accessorio=1
   |      |          
   |      ╰---------- Visible=1
   ╰----------------- HotWater=250

We can see that the pattern continues here: liquid amounts are two bytes long, while everything else is a single byte.

If you’re curious about every recipe this machine can make, a gist is available with a dump of the recipes.

We can now put all the pieces together and build our own brewing command with a recipe that we’re going to write from scratch: a large cappuccino!

83 f0 07 01 01 00 78 09 01 77 02 04 02
^  ^  ^  ^  ^  ^  ^  ^  ^  ^  ^  ^  ^- Preparation mode = PREPARE
|  |  |  |  |  |  |  |  |  |  +--+---- Taste (02) = strong (value 4)
|  |  |  |  |  |  |  +--+--+---------- Milk (09) = 375
|  |  |  |  +--+--+------------------- Coffee (01) = 120
|  |  |  +---------------------------- Trigger=START
|  |  +------------------------------- Beverage=Cappuccino (value 7)
|  +---------------------------------- Always 0xf0
+------------------------------------- Beverage dispense request   
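The same payload can be rebuilt from named parts, which makes the recipe easier to tweak later. This sketch simply mirrors the byte layout above:

```rust
/// Build the dispense payload for our large cappuccino, mirroring the
/// byte layout above (packet framing is added separately).
fn cappuccino_payload() -> Vec<u8> {
    let mut p = vec![
        0x83, // Beverage dispense request
        0xf0, // Always 0xf0 for requests
        0x07, // Beverage = Cappuccino
        0x01, // Trigger = START
    ];
    p.extend([0x01, 0x00, 0x78]); // Coffee (ID 01) = 120, two bytes
    p.extend([0x09, 0x01, 0x77]); // Milk (ID 09) = 375, two bytes
    p.extend([0x02, 0x04]);       // Taste (ID 02) = 4 (strong)
    p.push(0x02);                 // Preparation mode = PREPARE
    p
}
```

The result is exactly the thirteen payload bytes shown above.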

Let’s send that to the machine and see what happens:

Success! We have the most complicated and important part of communication with a coffeemaker working: brewing the coffee.

There’s a bit of work required to turn this into a full application, but you can find that pre-written in my longshot project on GitHub. You don’t need to have this coffeemaker, as it includes a simulator mode that will allow you to brew a virtual coffee and test out the packet parsing and generation code.

Continue reading… In part three of the series we’ll hook this up to GitHub Actions so that we can automate coffee brewing from the browser!


Follow me on Mastodon for more updates on this adventure!


This is going to be a long journey in three parts that covers the odyssey of getting a new coffeemaker, learning BTLE and how it works, reverse-engineering the Bluetooth interface and Android applications for the coffeemaker, writing a Rust-based CLI interface, and finally, hooking it all up to a GitHub actions bot that lets you brew a coffee just by filing an issue!


I’ve always been pretty content with using whatever coffeemaker I had nearby. For the last two or three years I’ve been using an old 2-in-1 Breville YouBrew coffeemaker, with a grinder built-in. It was a workhorse and worked perfectly until this September. A few months ago the machine asked me to run the regular descaling process to deal with our hard water, and this is where our adventure starts.

For those of you without hard water: our water in Canada, like that of Italy, tends to be hard due to the type of rock that water passes through on its way into the water supply. Over time, the hard water minerals precipitate out and stick to the metallic pipes and heaters within the machine, causing the temperature of the brewing water to drop, among other quality problems. Regular descaling is recommended, and some modern machines feature special descaling modes and chemicals that make this easier to do.

Figure: Major and Trace Elements in Tap Water from Italy. Journal of Geochemical Exploration 112:54-75, January 2012. DOI:10.1016/j.gexplo.2011.07.009

To descale a machine, you generally use a weak acid like vinegar, or a more complex descaling agent to dissolve the precipitate as salts so they can be flushed out. Unfortunately, the last time I ran the descaling process on this coffeemaker the acid I was using seems to have degraded one of the internal seals and the machine began to leak significantly.

Faced with the prospect of opening it up and actually figuring out which part was leaking, I was finally convinced that, given the amount of coffee we drink, it was time for us to invest in a more modern and a little more upscale coffeemaker.

Fortunately, a few months earlier my partner had been TA’ing a summer course in a building at a nearby hospital, and came home raving about a machine they had that let her brew coffee from an application: the Delonghi Dinamica Plus. We talked about how cool it was to brew from an app and how many different drinks it could make, but we both forgot about it for a few months until the old machine failed.

Our coffee situation was pretty dire and we decided to pull the trigger on a Delonghi Dinamica Plus, despite the eye-watering price. We waited anxiously for a week, keeping the old coffeemaker in service by putting it inside a tray to catch the leaks.

When the new coffeemaker arrived we set it up and – to avoid a major digression – the coffee it made was excellent. Tasty espresso, perfect americano, creamy cappuccino. There was one major problem: the application you’re supposed to use for the coffeemaker doesn’t reliably connect and stay connected to the machine.

The Dinamica Plus is about $100 more than the Dinamica, and this cost is effectively for the privilege of brewing coffee from your phone (along with some other goodies, like defining your favourite or custom drinks from the couch). The application is difficult to use and a bit buggy, however. It will often fail to find the coffeemaker. Once you’ve connected, there’s no guarantee that it’ll allow you to connect again without wiping all the saved data from the application. It’s also integrated with some sort of online service that returns a 404.

I found myself at a crossroads here: do I accept that the extra features I paid for will go to waste, or do I dig in and see if there’s some way to get this feature working so that I can actually use it? There’s a lot you can learn by being forced to dig into something new, and it looked like this might be an opportunity to understand a bit more about Bluetooth. So, as you can probably guess from the remaining length of this post, I took the latter path.

BTLE Background and Traffic Sniffing

The first question we need to answer is how we’re going to talk to this thing. We know it’s Bluetooth from the application’s insistence that Bluetooth be turned on. But it’s not showing up on my MacBook’s Bluetooth devices list, which means it’s somehow different than the common Bluetooth “Classic” audio and phone devices that show up there.

There may be other reasons why the coffeemaker doesn’t show up in a laptop’s list of Bluetooth devices, but the most likely candidate for something that isn’t Bluetooth Classic like those devices is Bluetooth Low Energy, more commonly known as BTLE.

One of BTLE’s major claims to fame was its use in iBeacon for the Apple ecosystem: a way of transmitting that a physical location was “interesting” in some way to some set of applications. This is done by “advertising” Apple’s Bluetooth SIG identifier, along with an ID that specifies you’re speaking the iBeacon language, and two 16-bit numbers of identifying metadata.

BTLE devices are often hidden from general device users, and you access them through specific programs and applications. For example, if you’re looking to rent a scooter, the app might communicate directly with the scooter over BTLE to unlock it for use directly from the phone – no cellular radio required!

We can scan for BTLE devices using an app like nRF Connect that will show us all the nearby devices to our mobile phone. In the screenshot below you’ll see everything that speaks BTLE that my phone can see:

There’s no obvious coffeemaker here but we can start to make some educated guesses by using signal strength. We’ll walk from the couch to the coffeemaker and see that one of our BTLE devices shows an increasing signal level.

We can confirm that this is the right device by forcing a connection to it and seeing the same Bluetooth icon appear on the screen that also appears when the application connects.

Aha! So now we’ve proved this coffeemaker is communicating over BTLE. But let’s digress and dig into what BTLE actually is, so we can figure out how to talk to it.

At a high level, BTLE works around the concept of a Service and a Characteristic, both identified by UUIDs. A Service is a container for Characteristics and, at a high level, two services on two different devices sharing the same UUID will generally implement the same “protocol”. BTLE stacks on devices will generally allow you to scan for announcements of devices advertising a Service UUID you are interested in, allowing you to take inventory of supported devices relatively easily.

The Characteristic is an endpoint on the Service and has some directionality information embedded within it. A Characteristic provides READ and/or WRITE operations to communicate with the device, and also describes which of the devices in the connection can be the origin of the data (ie: does the device broadcast packets? will it send them unannounced? does the device only communicate if you send it something first?). Table 2 in this tutorial lists the options if you’re interested in more detail.

We’ve learned some important information along the way. From our scan earlier, we know the two bits we need to communicate with the device, ie: the Service UUID is 00035b03-58e6-07dd-021a-08123a000300 and the Characteristic UUID is 00035b03-58e6-07dd-021a-08123a000301.

What we need now is something to send to it. I’m not brave enough to brute force packets for an expensive coffeemaker, so let’s try to capture some real communication between the application and the device.

Android has something called the Bluetooth HCI snoop log that, as you can imagine, allows you to snoop on Bluetooth communication. HCI stands for “host-controller interface”, and we’ll use the snooper to log interaction between the phone and the coffeemaker. The process of enabling and retrieving HCI logs varies wildly between device models, so your mileage may vary.

We can tap the button in the app to brew a cappuccino and, from the logs, see what the application sends to the machine. We don’t have any context as to what these packets are, but we know that sending this will brew a cappuccino with the default parameters (which according to the application is 65ml of coffee, 19 seconds of milk, and “medium” aroma):

Brew a cappuccino:
0d 14 83 f0 07 01 01 00 41 09 00 be 02 03 0c 00 1c 02 06 dc

Device sends back:
d0 07 83 f0 01 00 64 d9

Cancel brewing:
0d 08 83 f0 07 02 06 2f 41

Device sends back:
d0 07 83 f0 01 00 64 d9

Communicating in Rust

Now we’re going to need to talk to the coffeemaker ourselves. Each platform has its own libraries for talking to Bluetooth devices (CoreBluetooth on OSX, bluez-over-DBUS on Linux, etc.). Because of the complexity of dealing with cross-platform Bluetooth, most language ecosystems, like node.js, Rust, and Python, provide libraries that abstract this away.

My first attempts to talk to the coffeemaker were using node.js. I tried noble and bleno, but neither appeared to work for me on my Mac. I’ve been wanting to get my Rust skills back into shape, so I decided to explore the btleplug library, which is actively maintained on all modern platforms.

To connect to the device we’re interested in, first we need to start a scan to find devices that are advertising the Service and Characteristic we are interested in. The following code fragment will ask btleplug to scan for peripherals on every adapter connected to the system.

use btleplug::api::{Central, Manager as _, Peripheral as _, ScanFilter};
use btleplug::platform::Manager;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let manager = Manager::new().await?;
    let filter = ScanFilter {
        services: vec![SERVICE_UUID],
    };

    eprintln!("Looking for coffeemakers...");
    for adapter in manager.adapters().await? {
        adapter.start_scan(filter.clone()).await?;
        for peripheral in adapter.peripherals().await? {
            eprintln!("Found peripheral");
            peripheral.discover_services().await?;
            for service in {
                for characteristic in service.characteristics {
                    if service.uuid == SERVICE_UUID && characteristic.uuid == CHARACTERISTIC_UUID {
                        run_with_peripheral(peripheral.clone(), characteristic).await?;
                    }
                }
            }
        }
    }
    Ok(())
}

async fn run_with_peripheral(
    peripheral: Peripheral,
    characteristic: Characteristic,
) -> Result<(), Box<dyn std::error::Error>> {
    eprintln!("{:?}", characteristic);
    Ok(())
}

If we run this code, it prints:

Looking for coffeemakers...
Found peripheral
Characteristic { uuid: 00035b03-58e6-07dd-021a-08123a000301, service_uuid: 00035b03-58e6-07dd-021a-08123a000300, properties: READ | WRITE | INDICATE }

Looks good! We found the coffeemaker. Now we can try sending it the raw packet we captured earlier, the one we believe brews a cappuccino. Let’s change run_with_peripheral to this:

async fn run_with_peripheral(peripheral: Peripheral, characteristic: Characteristic) -> Result<(), Box<dyn std::error::Error>> {
    // The "brew a cappuccino" packet we captured from the HCI snoop log
    let data = &[0x0d, 0x14, 0x83, 0xf0, 0x07, 0x01, 0x01, 0x00,
                 0x41, 0x09, 0x00, 0xbe, 0x02, 0x03, 0x0c, 0x00,
                 0x1c, 0x02, 0x06, 0xdc];
    peripheral.write(&characteristic, data, btleplug::api::WriteType::WithoutResponse).await?;
    Ok(())
}

And hey, that started brewing a cappuccino!

Let’s take stock of where we are:

  • We know how to communicate with the device
  • We know the UUIDs of the endpoint that we’re going to communicate with
  • We can write a packet to the device and see that it performs an action

This might be enough to stop here. We could trace all the packets to brew all the coffee recipes we want, but that doesn’t seem like a complete solution.

Continue reading… In part two of the series we’ll reverse engineer the protocol itself!
