If you’ve spent much time in online gaming or IT-related spaces, you’ve likely seen people refer to optimisation, usually in the context of something being poorly optimised, or claims that software isn’t optimised these days like it used to be. What does this actually mean, though? Is it true?

What is optimisation?

Optimisation, as a term, is quite heavily misused in this context. The word itself just means to make the best use of available resources, but in the context of computing it’s a little more complicated. When it comes to computer software, there are many different resources available for a program to use: CPU time, memory, storage space, network bandwidth, etc. There are also many different concepts in computer science that fit different situations: different algorithms, data structures, programming paradigms, etc. So, to bring in the definition of the word, to optimise something in this context means to design the software around the resource constraints it has, using the computer science concepts that best fit the task.

Trade-offs

A piece of software with a very large amount of memory available that’s billed for CPU usage might optimise heavily for CPU time. This could mean storing everything it calculates in memory so it never needs to calculate it again. In the inverse situation, it could have unlimited CPU time but instead be billed by memory allocation. In that case it’d likely do the opposite and optimise for minimal memory usage, repeating work where necessary to avoid having to store it. The first example might take 100GB of memory and run in 1 minute, and the second might take 12 hours to run while only using 128MB of memory. Both would be considered “optimised.”
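To make that concrete, here’s a minimal sketch of the two approaches (the `expensiveCalculation` function is hypothetical, purely for illustration): one caches every result to save CPU time at the cost of memory, the other stores nothing and redoes the work on every call.

```typescript
// Hypothetical expensive computation, purely for illustration.
function expensiveCalculation(input: number): number {
  let result = 0;
  for (let i = 0; i < 10_000_000; i++) {
    result += Math.sin(input * i);
  }
  return result;
}

// Optimised for CPU time: cache every result so it's never recomputed.
// Memory use grows with the number of distinct inputs.
const cache = new Map<number, number>();
function cachedCalculation(input: number): number {
  const hit = cache.get(input);
  if (hit !== undefined) {
    return hit;
  }
  const result = expensiveCalculation(input);
  cache.set(input, result);
  return result;
}

// Optimised for memory: store nothing and redo the work on every call.
function uncachedCalculation(input: number): number {
  return expensiveCalculation(input);
}
```

Neither version is “better” in the abstract; which one counts as “optimised” depends entirely on which resource you’re constrained by.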

There are also cases where you can approximate a feature, lower simulation frequency or graphical fidelity, or make some other functionality trade-off to improve performance while only partially impacting the functionality of the software. This is one of the areas that people point to the most for optimisation. While this can sometimes be done, there will always be feasibility limits. At some point, you won’t be able to reduce resource usage without preventing the feature from working in the way that it needs to.
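As a rough sketch of what that trade-off can look like in practice (the names and tick rates here are made up, not from any particular game), an expensive system can be updated less often than the rest of the game loop, trading simulation fidelity for CPU time:

```typescript
// Hypothetical game loop: rendering and input run at 60 Hz, but the
// expensive simulation (e.g. pathfinding, detailed physics) only runs at 10 Hz.
const FRAME_RATE = 60;
const SIMULATION_RATE = 10;
const FRAMES_PER_SIM_TICK = FRAME_RATE / SIMULATION_RATE;

let frame = 0;

function cheapPerFrameUpdate(): void {
  // Placeholder: input handling, animation, rendering, etc.
}

function expensiveSimulationTick(): void {
  // Placeholder: work that's too costly to run every single frame.
}

function onFrame(): void {
  cheapPerFrameUpdate();
  if (frame % FRAMES_PER_SIM_TICK === 0) {
    // Accept lower simulation fidelity in exchange for CPU time.
    expensiveSimulationTick();
  }
  frame++;
}
```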

Sometimes you can actually find ways to tweak a feature so that it runs a lot better and also works better for its original purpose. I’ve done a talk on this in the past, using the Minecraft mod WorldEdit as a case study. This is generally done by considering how a feature can be implemented during the actual design phase, rather than afterwards.

Average resource availability

Not all software is designed to run under a single environment, though; some software is designed to run on a user’s own computer or other personal device. This means that software developers need to develop for an average-case device, or alternatively for the worst device that they want to support.

This could mean, for example, writing a multiplayer game that accounts for poor network conditions. The methods used to improve stability of gameplay under poor network conditions take up more resources elsewhere and are overall less performant than if the game were written only for absolutely perfect internet connections. The catch is that while a game written only for perfect connections would run a lot better for those players, it would run significantly worse for anyone with any level of internet instability. As someone from Australia, this is something I am very familiar with 😌. In this case, while the game is technically running slower for those with a perfect connection, it is actually more optimised, because it’s being optimised for cohesive gameplay even when your internet might not be amazing.
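As a simplified illustration of that kind of technique (this is a generic snapshot-interpolation sketch, not how any specific game does it), remote players can be rendered slightly in the past so that late or dropped packets can be smoothed over, at the cost of extra buffering and added latency for everyone:

```typescript
// Hypothetical snapshot interpolation for a remote player's position.
// Rendering runs ~100 ms in the past so dropped or late packets can be
// smoothed over, at the cost of buffer memory and added latency for
// everyone, including players with perfect connections.
interface Snapshot {
  time: number; // server timestamp in milliseconds
  x: number;
  y: number;
}

const INTERPOLATION_DELAY_MS = 100;
const buffer: Snapshot[] = [];

function onSnapshotReceived(snapshot: Snapshot): void {
  buffer.push(snapshot);
  // Drop snapshots that are too old to ever be rendered again.
  while (buffer.length > 2 && buffer[0].time < snapshot.time - 1000) {
    buffer.shift();
  }
}

function renderPosition(nowMs: number): { x: number; y: number } | undefined {
  const renderTime = nowMs - INTERPOLATION_DELAY_MS;
  // Find the two snapshots surrounding the (delayed) render time and blend them.
  for (let i = 0; i < buffer.length - 1; i++) {
    const a = buffer[i];
    const b = buffer[i + 1];
    if (a.time <= renderTime && renderTime <= b.time) {
      const t = (renderTime - a.time) / (b.time - a.time);
      return { x: a.x + (b.x - a.x) * t, y: a.y + (b.y - a.y) * t };
    }
  }
  return undefined; // No surrounding snapshots yet; the caller decides what to show.
}
```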

How is the term misused?

The way it’s often used online is usually to mean “runs well,” rather than making the best use of available resources. The main issue this leads to is interpreting optimisation as a binary state: something is either optimised or it isn’t, and if it’s not running at the performance you’re expecting, it’s “not optimised.”

Software takes time to run, and everything has a trade-off. Sometimes a feature is too important to what a piece of software does, and any approximations or reductions in functionality to improve performance are infeasible. Computers are astronomically more powerful than they were three decades ago, but they still aren’t magical. They have limits, and people now have extremely high expectations for what software actually does. Just because something isn’t instant doesn’t mean it’s not optimised.

So is modern software “less optimised”?

So, back to the elephant in the room: is modern software actually “less optimised” than software from back when every byte of memory mattered? In general, yes. But this doesn’t mean what you might think.

When dealing with extremely resource-starved hardware, significant thought and care needs to go into every aspect of the program. Back in university I had an assignment to write a snake game for a microcontroller, and I spent significant time designing it so that it would actually have enough memory available to fill the entire screen with the snake, which is usually considered the win condition. While we were taught to spend time optimising for memory usage, doing this required significantly more thought than what was actually expected of us.
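To give a sense of the kind of design work involved (this is a hypothetical sketch, not the actual assignment code), one classic trick is to store each snake segment as a 2-bit direction rather than a full coordinate pair, so the whole body can be reconstructed by walking from the head:

```typescript
// Hypothetical memory layout for a snake that can fill a 64x48 grid.
// Storing an (x, y) pair per segment would need 3072 * 2 bytes = 6 KiB;
// storing a 2-bit direction per segment needs 3072 / 4 = 768 bytes,
// because the body can be rebuilt by walking from the head.
const GRID_WIDTH = 64;
const GRID_HEIGHT = 48;
const MAX_SEGMENTS = GRID_WIDTH * GRID_HEIGHT;

// 0 = up, 1 = right, 2 = down, 3 = left, packed four per byte.
const directions = new Uint8Array(MAX_SEGMENTS / 4);

function setDirection(index: number, dir: number): void {
  const byte = index >> 2;        // which byte holds this segment
  const shift = (index & 3) * 2;  // which 2-bit slot within that byte
  directions[byte] = (directions[byte] & ~(0b11 << shift)) | (dir << shift);
}

function getDirection(index: number): number {
  const byte = index >> 2;
  const shift = (index & 3) * 2;
  return (directions[byte] >> shift) & 0b11;
}
```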

My point is, writing software for resource-starved systems takes a long time. While software engineers could theoretically spend ten to twenty times as long on each feature and write it like software was written back then, this would significantly increase the cost and complexity of software. It would take ten to twenty times as long for the same software to be written, and therefore cost that much more to develop as well. Code written like that is also generally significantly less readable, making it more complicated and expensive to maintain or modify in the future.

Modern software is optimised, that’s an inherent part of software development at most companies. It’s something that software engineers are still taught to do. However, it doesn’t make sense for us to optimise for resource-starved systems at the expense of feature development, which is what users really want.

While some software companies will allow less time for feature development, and therefore less time for optimisation, most do actually care about this. It’s a well-known fact in the web development industry, for example, that slow site loads lead to fewer sales and users. Most software engineers and companies do care; they just understand that people want rapid feature development and likely wouldn’t tolerate the prolonged development time it would take to optimise to the same extent as 30 years ago.

Conclusion

Optimisation is a significant passion area for me. I care deeply about it, and I’m very interested in it. It’s a large part of what I research, and what I do at work. The way optimisation is spoken about online, though, is a massive pet peeve of mine: the tendency to decree “unoptimised!” or “lazy devs!” when what people actually mean is “this runs slower than I expected it to.” Optimisation is not a binary state where something is either optimised or it isn’t; it’s a part of the implementation design process where the developer determines how best to write code to fill the need within the given resource constraints.

About the Author
Maddy Miller

Hi, I'm Maddy Miller, a Senior Software Engineer at Clipchamp at Microsoft. In my spare time I love writing articles, and I also develop the Minecraft mods WorldEdit, WorldGuard, and CraftBook. My opinions are my own and do not represent those of my employer in any capacity. Find out more.