Three Changes You Should Always Make Before Checking In Any Code

Photo by Maico Amorim

This story originally appeared in Hacker Noon on March 11, 2017.

A code refactor always leaves me with a feeling of accomplishment. Although major refactorings are the most satisfying, not every project warrants one. Here are three quick and easy refactorings that I make on all of my projects to improve code readability:

1. Clean up formatting

The overall format of your code is what makes it possible to quickly navigate to areas of interest. Consistent indentation, line breaks, and patterns help programmers skim large chunks of code. Take the following sloppily formatted code for example:

Inventory inventory = new Inventory();
for (int i = 0; i < cars.Count; i++){
    inventory.Cars.Add(cars[i]);

    var owner = owners.Where(x => x.VIN == cars[i].VIN).OrderByDescending(x => x.PurchaseDate).FirstOrDefault();

inventory.PreviousOwners.Add(new Owner { VIN = cars[i].VIN,
                                        Name = owner.Name});
}

and compare it to this:

Inventory inventory = new Inventory();

for (int i = 0; i < cars.Count; i++)
{
    inventory.Cars.Add(cars[i]);

    var owner = owners.Where(x => x.VIN == cars[i].VIN)
                      .OrderByDescending(x => x.PurchaseDate)
                      .FirstOrDefault();

    inventory.PreviousOwners.Add(new Owner
    {
        VIN = cars[i].VIN,
        Name = owner.Name
    });
}

The second example above consistently indents lines, adds new lines, and follows consistent coding patterns. This makes it easy to skim the code quickly.

Books have chapters, headings, and paragraphs defined by formatting that make it easy to find what is needed at a glance — formatting code makes it possible to find things easily too.

2. Rename classes, methods, and variables

Classes, methods, and variables should be named in such a way that they help the programmer understand what is happening in the code. The shorter the scope of an object, the more permissible it is to use a short name (e.g. "i" as a counter in a loop that's only a line or two long).

It's easy to use uninformative names when writing a "first draft" of your program, but using the first name that comes to mind isn't always the best choice. Take a look at the following example:

public IEnumerable<string> GetData(int year)
{
    var result = CallApi("/Cars", year);

    List<string> output = new List<string>();

    foreach (var c in result)
    {
        output.Add(c.Make);
    }

    return output;
}

versus:

public IEnumerable<string> RetrieveCarMakes(int year)
{
    var inventory = CallApi("/Cars", year);

    List<string> carMakes = new List<string>();

    foreach (var car in inventory)
    {
        carMakes.Add(car.Make);
    }

    return carMakes;
}

Using names that make sense makes it much easier for someone else (or your future self) to figure out what your code is doing.

3. Break up long expressions

When you get into a code-writing groove it's easy to keep chaining commands together or using single-line syntax to speed up your writing. Oftentimes, I look back on my code a day later and see long expressions like these two:

bool hasHighSaleProbability = (daysOnLot < 60) ? true : (color == "Red" ? true : false);

var highSaleProbabilityVehicles = Inventory.Where(x => x.DaysOnLot < 60 || x.Color == "Red").Select(x => new { Make = x.Make, Model = x.Model, Year = x.Year }).Distinct().Select(x => new RecentInventoryView { YearDropdown = new SelectListItem { Text = x.Year, Value = x.Year }, MakeModelDropdown = new SelectListItem { Text = x.Make + " " + x.Model, Value = x.Make + " " + x.Model } });

Now compare them against the same logic broken out across multiple lines:

bool hasHighSaleProbability = false;

if (daysOnLot < 60 || color == "Red")
{
    hasHighSaleProbability = true;
}

var distinctMakesModelsYears = Inventory
    .Where(x => x.DaysOnLot < 60 || x.Color == "Red")
    .Select(x => new
    {
        Make = x.Make,
        Model = x.Model,
        Year = x.Year
    })
    .Distinct()
    .ToList();

var recentInventoryView = distinctMakesModelsYears
    .Select(x => new RecentInventoryView 
    { 
        YearDropdown = new SelectListItem
        {
            Text = x.Year,
            Value = x.Year
        },
        MakeModelDropdown = new SelectListItem
        {
            Text = x.Make + " " + x.Model,
            Value = x.Make + " " + x.Model
        }
    });

Although the first code snippet is technically shorter and has fewer lines, it is nearly unreadable. The second snippet breaks out the if logic and breaks up all of the chained methods into more logical variables. The result? Code logic that is much easier to follow.

How to write understandable code for your future self


This story originally appeared in Hacker Noon on February 10, 2017.

I can't tell you the number of times the title of this post has crossed my mind as I dug through a piece of code that I hadn't touched in years.

At the time I wrote it, I probably thought my code was beautiful. An elegant masterpiece. It should have been printed, framed, and hung on a wall of The Programming Hall of Fame. As clever as I thought I was a few years ago, I am rarely able to read my old code without wasting some serious time debugging it.

This problem plagued me regularly, so I tried different techniques to make my code easier to understand.

First I tried adding comments to my code. Pretty easy to do, but not all that helpful.

When comments weren't cutting it, I tried to write self-documenting code instead: small, well-named classes and methods that described their limited functionality. This made the code more readable but I would still have questions.

I also tried documenting my code in a separate file. This had its benefits but didn't solve the problem entirely either.

Eventually I figured out what I needed to do: I needed to use all three of the above techniques to write truly beautiful and understandable code.

Comments

Comments in your code should document the why, not the how. When I first started programming, I would often write very unhelpful comments like this:

public class Class1
{
  public List<string> DoWork(List<string> a)
  {
    List<string> numbers = new List<string>();

    // Loop over data
    for (int i = 1; i < a.Count; i++)
    {
      int s = a[i].IndexOf(" ");
      string num = a[i].Substring(0,s);

      // Save data
      numbers.Add(num);
    }

    return numbers;
  }
  ...
}

"Loop over data"? "Save data"? These comments are beneficial to understanding the code. I can easily tell that I have a loop, and that I'm adding my data to a collection, why should I waste valuable screen real estate with unhelpful comments?

Instead of saying what or how, comments should explain why. A programmer will see the for loop and know that it's looping over some type of collection of Addresses. However, a programmer will not know why we are starting our counter with int i = 1 — this is where adding a comment can improve the understanding of the code:

// i = 1 because the view will never display the first address
for (int i = 1; i < Addresses.Count; i++) {...

Now, we know some of the business logic driving our app. We know we don't process the first address because it never gets outputted to our view. This comment answers the why behind skipping the first address, adding clarity to the code.

Additionally, we can remove the // Save data comment completely since it adds no insight.

Self-Documentation

Comments alone won't make code easy to interpret later, however. Let's take a look at our method again with our improved comments:

public class Class1
{
  public List<string> DoWork(List<string> a)
  {
    List<string> numbers = new List<string>();

    // i = 1 because the view will never display the first address
    for (int i = 1; i < a.Count; i++)
    {
      int s = a[i].IndexOf(" ");
      string num = a[i].Substring(0,s);

      numbers.Add(num);
    }

    return numbers;
  }
  ...
}

What exactly is Class1? What kind of work is DoWork() doing? What about the use of int s? The names of the objects in our code don't aid our understanding of what this code is doing.

This is where the idea of self-documenting code comes in: instead of creating objects with arbitrary, non-informative names ("I swear I'll refactor this later"), we build descriptive objects. If I have a class, its name should give me a good idea about what its properties and methods could be. A method's name should be descriptive enough to tell me what I should expect as an output without having to dig into the details of what that method is doing. Variables should add additional illumination that make what and how type comments obsolete.

In our example, let's make our code self-documenting. First, this class is intended to help us clean address data. Let's call it AddressStandardizer. With that simple renaming we know that all of the properties and methods of this class should pertain to dealing with dirty address data and making it cleaner.

What about the method name List<string> DoWork(List<string> a)? Well, I can tell you that this method is trying to parse out the number portion of a street address. So let's change the method name and signature to something more informative, like List<string> ParseHouseNumbers(List<string> addresses). Now we can make an educated guess that this method accepts some address strings as an input and will return a list of parsed house numbers.

If we clean up some variable names, our code becomes much easier to read, like this:

public class AddressStandardizer
{
  public List<string> ParseHouseNumbers(List<string> addresses)
  {
    List<string> houseNumbers = new List<string>();

    // i = 1 because the view will never display the first address
    for (int i = 1; i < addresses.Count; i++)
    {
      int firstSpaceIndex = addresses[i].IndexOf(" ");
      string houseNumber = addresses[i].Substring(0,firstSpaceIndex);

      houseNumbers.Add(houseNumber);
    }

    return houseNumbers;
  }
  ...
}

Documentation

Our code is finally starting to shape up. We have comments explaining why we chose to do something and we refactored our code to have object names that are informative.

The code at this point is OK but not perfect. If we come back to this code after a few years away, we now have enough information to figure out what it's doing with relative ease.

The big piece of information that we are still missing, however, is why this code was written in the first place.

Often times, I get a question from a manager or analyst about why we decided to build the project in the first place. Or I'll get a request for information about how the logic in the program works. Without a proper documentation file, the best thing I can do is send the business user a copy of my code. Most of the time that isn't very helpful.

What would be helpful though is an explanation of what our program is doing at a high-level. This is the purpose of formal documentation.

The documentation for this section of code might look something like this:

…After retrieving our customer information from our vendor, the program processes the data and cleans it up to load into our reporting warehouse. Cleaning up the data means parsing the addresses into multiple columns including house number, street name, street suffix, city, state, and postal code…

Now, when a business user needs to know what your program is doing, you can easily send the above documentation their way. The documentation also acts as a nice refresher for you, the programmer, when it comes time to revisit the code, as well as for any future coworkers who will be new to the project.

Wrap up

All of these techniques are necessary to eliminate code headaches down the road. Learn from my experience — not doing all three may save a little bit of time in the short term but it will hurt at some point in the future. Once you get in the habit of writing all three kinds of documentation, it will become second nature and make your life (and the life of your future self!) much easier.

Comments in code should explain the why, not the how:

  • The how should be explained by well named classes and methods
  • Separate documentation is still needed for developers and non-developers alike. Think of business users who need to know how your process works and what business logic is in it; it's nice to have a high-level document explaining the business use of your process that someone non-technical can understand.

C#'s foreach ruined my afternoon

"Forest Fire" by CIFOR is licensed under CC BY-NC-ND 2.0

The other afternoon I ran into some nightmarish debugging with the following code:

private static void StartThreads()
{
    var values = new List<int>() { 1, 2, 3 };
    var threads = new List<Thread>();

    foreach (var value in values)
    {
        Thread t = new Thread(() => Run(value));
        threads.Add(t);
        t.Start();
    }

    // Wait for all threads to complete
    foreach (Thread t in threads)
        t.Join();
}

private static void Run(int value)
{
    Console.Write(value.ToString());
}

(I know, I know, I wish I could be using TPL but in this case I couldn't)

On my local machine, the code above ran and gave me my expected console output of 123 (your results may vary depending on what order the threads run in).

When I ran this code on my server however, the output was 333.

begin pulling out hair

Long story short, after a couple hours of investigation I figured out that a foreach loop in C# ≥ 5.0, which is what I run on my local machine, works differently under the hood than a foreach loop in C# < 5.0, which is what I had on my server.

From the C# 4.0 spec, a foreach loop is really a while loop behind the scenes, meaning the code above really translates into something like this:

private static void StartThreads()
{
    var values = new List<int>() { 1, 2, 3 };
    var threads = new List<Thread>();
    IEnumerator<int> e = ((IEnumerable<int>)values).GetEnumerator();

    try
    {
        int v; // DECLARED OUTSIDE OF THE LOOP!!!
        while (e.MoveNext())
        {
            v = (int)(int)e.Current;
            Thread t = new Thread(() => Run(v));
            threads.Add(t);
            t.Start();
        }
    }
    finally
    {
        if (e != null) ((IDisposable)e).Dispose();
    }

    // Wait for all threads to complete
    foreach (Thread t in threads)
        t.Join();
}

The important thing to note in the above code is that int v gets declared outside of the while loop.

In the C# 5.0 spec, that int v gets declared inside the loop (causing it to get recreated with every iteration):

private static void StartThreads()
{
    var values = new List<int>() { 1, 2, 3 };
    var threads = new List<Thread>();
    IEnumerator<int> e = ((IEnumerable<int>)values).GetEnumerator();

    try
    {
        while (e.MoveNext())
        {
            int v; // C# 5.0 DECLARED INSIDE THE LOOP
            v = (int)(int)e.Current;
            Thread t = new Thread(() => Run(v));
            threads.Add(t);
            t.Start();
        }
    }
    finally
    {
        if (e != null) ((IDisposable)e).Dispose();
    }

    // Wait for all threads to complete
    foreach (Thread t in threads)
        t.Join();
}

Because my local machine and server were running different versions of .NET, the same exact code was producing totally different results.

Eventually I found Eric Lippert's article about the matter. Since I'm still fairly new to the world of .NET, I wasn't around for the big debate that took place in his comments section regarding which implementation should be the correct one. However, it is interesting to note that the C# devs decided to switch the logic of how the foreach loop operates so late in the game.

My eventual workaround for the .NET 3.5/C# 4.0 server was to assign the int to a newly created variable inside the foreach:

foreach (var value in values)
{
    var tempValue = value; // THE FIX
    Thread t = new Thread(() => Run(tempValue));
    threads.Add(t);
    t.Start();
}
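
As an aside, here is a minimal sketch of what the TPL version I wished for might have looked like on a newer runtime. It assumes .NET 4.5+/C# 5.0, where Task.Run and Task.WaitAll are available and each foreach iteration gets its own copy of the loop variable, so the lambda capture behaves as expected:

// Requires using System.Collections.Generic; and using System.Threading.Tasks;
private static void StartTasks()
{
    var values = new List<int>() { 1, 2, 3 };
    var tasks = new List<Task>();

    foreach (var value in values)
    {
        // Task.Run queues the work on the thread pool; in C# 5.0 each
        // iteration captures its own copy of "value"
        tasks.Add(Task.Run(() => Run(value)));
    }

    // Wait for all tasks to complete
    Task.WaitAll(tasks.ToArray());
}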

As frustrating as it may be to debug problems like this, it is nice to learn a little bit more about the language's history and idiosyncrasies.

XmlReader vs XmlDocument Performance


Recently I have been working on a project where I needed to parse XML files that were between 5 MB and 20 MB in size. Performance was critical for the project, so I wanted to make sure that I would parse these files as quickly as possible.

The two C# classes that I know of for parsing XML are XmlReader and XmlDocument. Based on my understanding of the two classes, XmlReader should perform faster in my scenario because it reads through an XML document only once, never storing more than the current node in memory. In contrast, XmlDocument stores the whole XML file in memory, which has some performance overhead.

Not knowing for certain which method I should use, I decided to write a quick performance test to measure the actual results of these two classes.

The Data

In my project, I knew up front what data I needed to extract from the XML, so I decided to configure the test in a way that mimics that requirement. If my project had required recursive logic in the XML document (needing a piece of information further down in the file in order to know what to pull from earlier in it), I would have set up an entirely different test.

For my test, I decided to use the Photography Stack Exchange user data dump as our sample file since it mimics the structure and file size of my actual project's data. The Stack Exchange data dumps are great sample data sets because they involve real-world data and are released under a Creative Commons license.

The Test

The C# code for my test can be found in its entirety on GitHub.

In my test I created two methods to extract the same exact data from the XML; one of the methods used XmlReader and the other XmlDocument.

The first test uses XmlReader. The XmlReader object only stores a single node in memory at a time, so in order to read through the whole document we need to use while (reader.Read()) to loop over all of the nodes. Inside the loop, we check whether each node is an element that we are looking for and, if so, parse out the necessary data:

public static void XmlReaderTest(string filePath)
{
    // We create storage for ids of all of the rows from users where reputation == 1
    List<string> singleRepRowIds = new List<string>();

    using (XmlReader reader = XmlReader.Create(filePath))
    {
        while (reader.Read())
        {
            if (reader.IsStartElement())
            {
                if (reader.Name == "row" && reader.GetAttribute("Reputation") == "1")
                {
                    singleRepRowIds.Add(reader.GetAttribute("Id"));
                }
            }
        }
    }
}

On the other hand, the code for XmlDocument is much simpler: we load the whole XML file into memory and then write a LINQ query to find the elements of interest:

public static void XmlDocumentTest(string filePath)
{
    List<string> singleRepRowIds = new List<string>();

    XmlDocument doc = new XmlDocument();
    doc.Load(filePath);

    singleRepRowIds = doc.GetElementsByTagName("row")
        .Cast<XmlNode>()
        .Where(x => x.Attributes["Reputation"].InnerText == "1")
        .Select(x => x.Attributes["Id"].InnerText)
        .ToList();
}

After writing these two methods and confirming that they are returning the same exact results it was time to pit them against each other. I wrote a method to run each of my two XML parsing methods above 50 times and to take the average elapsed run time of each to eliminate any outlier data:

public static double RunPerformanceTest(string filePath, Action<string> performanceTestMethod)
{
    Stopwatch sw = new Stopwatch();

    int iterations = 50;
    double elapsedMilliseconds = 0;

    // Run the method 50 times to rule out any bias.
    for (var i = 0; i < iterations; i++)
    {
        sw.Restart();
        performanceTestMethod(filePath);
        sw.Stop();

        elapsedMilliseconds += sw.ElapsedMilliseconds;
    }

    // Calculate the average elapsed seconds per run
    double averageSeconds = (elapsedMilliseconds / iterations) / 1000.0;

    return averageSeconds;
}
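
To tie it together, the harness might be driven by something like the following sketch; the file path below is just a placeholder, while the method names come from the snippets above:

// Hypothetical driver; the file path is a placeholder for wherever the
// Stack Exchange data dump was extracted
string filePath = @"C:\data\photo.stackexchange.com\Users.xml";

double xmlReaderSeconds = RunPerformanceTest(filePath, XmlReaderTest);
double xmlDocumentSeconds = RunPerformanceTest(filePath, XmlDocumentTest);

Console.WriteLine("XmlReader average:   " + xmlReaderSeconds + " seconds per run");
Console.WriteLine("XmlDocument average: " + xmlDocumentSeconds + " seconds per run");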

Results and Conclusions

Cutting to the chase, XmlReader performed faster in my test:

Performance test results.

Now, is this ~0.14 seconds of speed difference significant? In my case, it is, because I will be parsing many more elements and many more files dozens of times a day. After doing the math, I estimate I will save 45–60 seconds of parsing time for each set of XML files, which is huge in an almost-real-time system.

Would I have come to the same conclusion if blazing fast speed was not one of my requirements? No, I would probably have gone the XmlDocument route because the code is much cleaner and therefore easier to maintain.

And if my XML files were 50 MB, 500 MB, or 5 GB in size? I would probably still use XmlReader at that point because trying to store 5 GB of data in memory will not be pretty.

What about a scenario where I need to go backwards in my XML document? That might be a case where I would use XmlDocument, because it is more convenient to move backwards and forwards through the document with that class. However, a hybrid approach might be my best option if the data allows it: if I can use XmlReader to get through the bulk of my content quickly and then load just certain child trees of elements into XmlDocument for easier backwards/forwards traversal, that would seem like an ideal scenario.
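
A rough sketch of that hybrid idea, reusing the "row" elements from the test above, might look like this (illustrative only, not code from my project):

public static void HybridParse(string filePath)
{
    using (XmlReader reader = XmlReader.Create(filePath))
    {
        while (reader.Read())
        {
            // Skim forward cheaply with XmlReader...
            if (reader.IsStartElement() && reader.Name == "row")
            {
                // ...then load just this element's subtree into an XmlDocument
                // so it can be traversed backwards and forwards conveniently
                XmlDocument doc = new XmlDocument();
                doc.Load(reader.ReadSubtree());

                XmlNode row = doc.DocumentElement;
                // inspect row.Attributes, child nodes, etc. here
            }
        }
    }
}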

In short, XmlReader was faster than XmlDocument for me in my scenario. The only way I could come to this conclusion though was by running some real-world tests and measuring the performance data.

So should you use XmlReader or XmlDocument in your next project? The answer is it depends.

Coffee's Variety Problem

Freshly roasted coffee beans from central Mexico.

Spaghetti Sauce Origins

During the 1970s consumers had a limited number of spaghetti sauces that they could purchase at their local supermarket. Each store-bought sauce tasted the same, developed in test kitchens from the average flavor preferences of focus groups.

During this time, Prego brand spaghetti sauce was new to the market and was having a difficult time competing against the more established spaghetti sauce brands like Ragu. Prego tasted the same as all of the other brands so capturing loyal customers from other brands was a challenge.

Struggling to become competitive, Prego brought on board Howard Moskowitz. Today some consider Moskowitz the godfather of food testing and market research, but during the 1970s Moskowitz's beliefs on product design were contrary to the mainstream establishment. Moskowitz believed that using the average preferences of focus groups to develop a product led to an average-tasting spaghetti sauce: acceptable to most, loved by none.

Instead of giving focus groups similar tasting sauces, Moskowitz decided to test the extremes: have groups compare smooth pureed sauces with sauces that have chunks of vegetables in them, compare mild tasting sauces with ones with lots of spice, and so forth.

The results of this type of testing may seem obvious today, but at the time they were unheard of: people prefer different kinds of spaghetti sauce. It was better for a food company to have a portfolio of niche flavors rather than try to make one product that appealed to everybody.

Today spaghetti sauce no longer has a variety problem.

After all of this initial research Prego made Extra-Chunky spaghetti sauce, which instantly became a hit with all of the people who prefer a thicker type of tomato sauce. In the years since, spaghetti sauce manufacturers have caught on to the technique and supermarket shelves today are lined with regular, thick and chunky, three-cheese, garlic, and many other types of sauces that appeal to a wide array of consumer flavor preferences.

Coffee's Variety Problem

Coffee today has the same problem that spaghetti sauce did a quarter-century ago.

The average supermarket's coffee aisle may look like it has a wide variety of choices: whole beans versus ground coffee versus coffee pods, arabica versus robusta beans, caffeinated versus decaf, medium versus dark roast, etc…

However, once you take a look at what coffee a particular individual may drink (let's say whole bean, medium-roast, caffeinated arabica beans), the amount of variety found at a supermarket is surprisingly small, maybe only 2 or 3 different types.

The limited selection of whole bean, medium roasted coffee at my local supermarket.

Additionally, it is nearly impossible to know how long the coffee has been sitting unsold on the shelves, it's difficult to find a variety of coffee from different geographic regions around the world, and only if you are extremely lucky can you find a coffee that has been lightly roasted.

Independent coffee shops, especially those that roast their own coffee, offer slightly better options in terms of variety, however they come at a steep price: it is not unusual to see these coffees selling for $16–$28 per pound.

I understand these small-batch roasters experience higher costs due to lack of scale, availability of beans, and a slowly developing market for third-wave premium coffee; however, paying $20 for a pound of coffee beans (or even worse, the standard 12oz bag) is not something I can justify doing regularly.

This is what caused me to set out on my quest to get premium quality coffee beans at an affordable price.

The Internet Cafe

While the internet offers advantages in buying premium roasted coffee beans, there are still issues with unknown freshness and high prices, especially when the cost of shipping is included.

Shipping costs and price per pound can be reduced when buying in bulk, but buying in bulk means I'll have roasted beans sitting around for a long time before being consumed, therefore affecting freshness and taste. Short of finding friends who want to split a large coffee order, buying roasted coffee beans online isn't a great option.

What is a great option is buying green, or un-roasted, coffee beans. The shelf life of green beans is up to one year if stored properly and green beans are significantly cheaper than roasted beans because there is less processing involved.

Green, un-roasted coffee beans.

Not only are green beans fresher and cheaper, there is significantly more variety available. Commercial roasters need to buy beans in large quantities in order to be able to sell to coffee shops and supermarkets. This means they are sourcing coffee beans from large commercial farms that are able to supply such a large amount of coffee beans.

If we buy green beans for personal consumption at home, the number of farms that we can buy from is hugely expanded since we only need to buy in small quantities. A farm that only produces a few hundred pounds of beans each year is now within our grasp since commercial roasters would never be able to purchase from them.

There are many retailers online that cater to the home roasting market. My favorite is http://sweetmarias.com and they always have a huge selection of premium beans from all over the world, many for under $6/pound.

Home Roasting

Roasting beans at home used to be the norm in the early nineteenth century. Roasting coffee beans is similar to making popcorn and can be done over a fire, in a stove, or in the oven. While these methods work, they are messy and involved. Fortunately, cleaner and more scientific options exist.

Commercially available home roasters are one such option, however they cost several hundred to thousands of dollars — well out of my price range.

The most easily obtainable and easy to use home coffee roaster is an electric air-powered popcorn popper. These retail for $15–$25 new. If you decide to try this route, get one without a mesh screen on the bottom.

Once you have an air popcorn popper, roasting coffee beans is as easy as pouring them in, turning on the heat, and waiting until they turn the brown color you are accustomed to seeing.

Although roasting beans can be as easy as turning on the popcorn popper and waiting until the beans reach a desired color, there is a scientific rabbit hole that roasting geeks like me eventually wander down…

Home Roasting 2.0: Web Roast

When I first started roasting coffee beans at home, I started with the air popper method. I soon wanted to start experimenting more with how to make the process more automated, as well as how I could become more precise and play with different variables in order to change the flavors of my roasted coffee.

There are some mods you can make to a regular air popcorn popper to give you more control over how you roast your beans, but ultimately I wanted even more control.

I present to you, Web Roast:

The home-made, Internet of Things (IoT) coffee roaster.

The finished project with all code can be found on my project's GitHub page: https://github.com/bertwagner/Coffee-Roaster/

Essentially, this is still an air popper but with much more control. The key features include:

  • Air temperature probe
  • Ability to switch the fan and heat source on/off independently
  • Holding a constant temperature
  • Automatic graphing of roast profiles
  • Ability to run saved roast profiles

Now instead of standing at my kitchen counter turning the air popper on and off, holding a digital temperature probe, and shaking the whole roaster to circulate and cool the beans between "on" cycles, I can simply control all of these conditions from my iPhone.

Screen shot of the web app user interface.

From my phone I decide when to heat the beans, when to maintain a certain temperature, and how quickly or slowly to hit certain roasting stages or "cracks", and everything is logged so I can reproduce the results in future runs.
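
To give a rough idea of what "holding a constant temperature" involves, here is a hypothetical sketch of a simple on/off control loop. The ReadTemperatureCelsius and SetHeater calls are made-up placeholders for illustration, not the actual code from the project linked above:

// Hypothetical on/off ("bang-bang") temperature hold; the hardware calls
// below are placeholders, not the real roaster code.
// Thread.Sleep requires using System.Threading;
double targetCelsius = 215.0;
double hysteresis = 2.0;
bool roastInProgress = true; // in the real roaster this would be toggled by the user

while (roastInProgress)
{
    double current = ReadTemperatureCelsius();

    if (current < targetCelsius - hysteresis)
        SetHeater(true);   // too cool: turn the heating element on
    else if (current > targetCelsius + hysteresis)
        SetHeater(false);  // too warm: heat off, let the fan cool the beans

    Thread.Sleep(250);     // check the probe a few times per second
}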

Beans develop different flavors based on how quickly or slowly they go through different phases of roasting. Some beans might be better suited for quick roasts that will maintain acidic fruit flavors. Other beans might need to be roasted more slowly to bring out nutty and cocoa flavors. The moisture content of a bean will also have an effect on roasting times, as well as beans that are sourced from different regions of the world.

Next Steps

Although I'm extremely satisfied with how my roaster has turned out, there's still a lot on the to-do list.

I'm currently adding functionality to save roast profiles, so after an initial run of desired results, reproducing those results for the same batch of green beans is easy.

In the future, I'd like to build a second, bigger drum-style roaster for being able to roast larger batches at a time.

Follow my GitHub coffee roaster project page to keep up with any future updates. Also, I would love to hear from anyone who has built similar projects of their own.

Good luck and happy roasting!
