I believe Computer Scientists have a moral duty to help prevent climate change. Some very simple steps could be taken to do this. I list them in this post.
Why is it a moral imperative for Computer Scientists?
I believe those who have the ability to help also have a responsibility to help. Whilst this principle tends to cause me a lot of extra work, it’s one I truly believe in.
As Computer Scientists we uniquely work on behalf of others to implement their needs. By always implementing our work in a way that maximises efficiency, and by convincing customers this is AGoodThing [tm], we can act as a force multiplier in the fight against climate change.
Computers are used for everything. Imagine the impact we could have if we halved the processing power computers use. Hell, even if we only reduced it by 10%, that’d buy us some more time to fix the planet.
We’re uniquely placed throughout society. We have the ability to help everyone, every day. We therefore have the responsibility to reduce carbon emissions.
Here’s how we can do it. This is not an exhaustive list.
Write efficient code
It’s easy to write code quickly rather than efficiently. We’ve all done it. “Oh, it’s only a small dataset”… that is evaluated every 5 seconds… continuously for the 3 years of its life… on 1 million devices worldwide… That’s a small amount of extra processor time, used over and over again.
Everybody has done this many times. It all adds up. An extra 5% of processing time here and there adds up to drastic bloat over time. ‘Good enough’ should be removed from our lexicon.
At the end of last year we implemented a change in the Herald Proximity API that reduced battery use on mobile phones by 30%. That code is now running on 7.5 million phones worldwide – but with the energy footprint of only around 5 million devices. That’s a huge impact from a couple of days’ fine tuning. This is why we include energy use changes in EVERY single release.
We should all assume our code is going to be run in a tight processing loop, on millions of devices, and design accordingly.
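As a minimal sketch of what this means in practice (the file-polling scenario and function names are hypothetical, not from any real project), hoisting or caching the repeated work is often all it takes:

```python
# Minimal sketch (illustrative only): the "small dataset evaluated every
# 5 seconds" pattern, first done wastefully, then done once.
import time
from functools import lru_cache

# Wasteful: re-parses the same file on every tick, forever.
def poll_naive(path):
    while True:
        with open(path) as f:
            data = [float(line) for line in f if line.strip()]
        print(sum(data) / len(data))
        time.sleep(5)

# Better: parse once and cache; only the cheap work repeats.
@lru_cache(maxsize=1)
def load_dataset(path):
    with open(path) as f:
        return tuple(float(line) for line in f if line.strip())

def poll_efficient(path):
    while True:
        data = load_dataset(path)   # hits the disk once, then served from cache
        print(sum(data) / len(data))
        time.sleep(5)
```

Multiply that saved parse across millions of devices and three years of five-second ticks and it stops being “only a small dataset”.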
Take advantage of multi-threading
Running one core at 100% and leaving the other cores idle is a bit of a sacrilege these days. Most of us have 6-core processors that are hyper-threaded – giving us a glorious 12 hardware threads to use. Most of our code uses one.
Data scientists are particularly bad for this in Python. The amount of analytical code I’ve seen that takes 24 hours to run but uses 1 of Boudica’s 28 hardware threads is incredible. (Boudica was a warrior Queen in England who fought the Romans. As my new machine is a super-powerful desktop, it seemed appropriate.)
We need to make multi-threading the default wherever possible. Let’s make it easy for people and design it into our languages, runtime environments, and Operating Systems so it’s easier to consume. As API designers, let’s think about how our code is used and run in real life rather than in testing.
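For the Python case specifically, here’s a minimal sketch (the analyse() function and its inputs are made up for illustration): CPU-bound work needs a process pool rather than threads to get past the GIL and actually use all the hardware threads.

```python
# Minimal sketch: spreading CPU-bound analysis over every core instead of one.
# The analyse() function and its inputs are hypothetical.
import os
from concurrent.futures import ProcessPoolExecutor

def analyse(chunk):
    # Stand-in for an expensive per-chunk computation.
    return sum(x * x for x in chunk)

def main():
    chunks = [list(range(i, i + 100_000)) for i in range(0, 2_800_000, 100_000)]
    workers = os.cpu_count() or 1   # e.g. all 28 threads on a machine like Boudica
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(analyse, chunks))
    print(sum(results))

if __name__ == "__main__":
    main()
```

The same 24-hour analysis spread across 28 workers finishes far sooner, so the machine spends much less wall-clock time powered up doing the same total work – the “race to idle” argument.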
Make better use of existing hardware
I’ve been working a lot on embedded wearable devices lately. There was one point where I thought, “You know what, using a really old chip is folly, let’s just get everyone to upgrade. It’s only an extra couple of dollars per chip.”
I’m really glad I didn’t make that my recommendation in the end. It turns out there are over 750k of the older chips already produced. That’s a lot of silicon and production carbon that would have gone to waste if we hadn’t supported the older chips.
Support 32-bit and Arm
Related to my previous point, most API designers concentrate on Windows and Linux on modern 64-bit Intel processors, leaving others to port their API to 32-bit or Arm platforms. I think this is a mistake. Arm uses less energy (for the most part… YMMV) and there’s a lot of 32-bit kit out there still.
The Haiku Operating System does an amazing job running on older hardware and is extremely responsive. I don’t believe there’s any excuse for not supporting older hardware. Imagine the resources – not to mention money – that could be saved if the average computer lasted 10 years instead of 3. It’s easy to do in reality, let’s not kid ourselves.
Use APIs that are already available
Don’t roll your own unless you have to. Not only will you expend considerable extra resources writing it, but the code then needs hosting, CI/CD automated testing and deployment, and so on. All of that costs carbon.
Try and find an API that does 80% of what you need, then get involved in that project to help out. I’ve been doing this recently. I decided to use Zephyr RTOS as the base OS for an embedded project.
I’m having to write a few small device drivers, but the effort is much, much less than trying to write a lot of low-level OS-like code for a bespoke device. It also makes my API more useful, as it can be used alongside existing code on a common platform.
Publish opensource software and hardware under permissive licenses
Related to the previous point, please oh please publish your code under a permissive opensource license. Permissive licenses allow both commercial and non-commercial use. Instead of one API for ‘pure’ opensource and another for permissive use, let’s just publish everything under a permissive license like Apache 2.0 by default. Apache 2.0 also provides patent protection, reducing the risk to organisations of using the code.
Want to save the environment? Publish your code as Apache 2.0.
– Me
Also for opensource hardware, CERN have done a fantastic job with the CERN Open Hardware Licence v2 – Permissive. We’re using this for Herald hardware designs. It’s like Apache 2.0 but also allows designs to include commercial and patented hardware chips. Commercial companies can take our designs, modify them, and even distribute them as their IP so long as the opensource parts are referenced.
I’m working with a commercial hardware manufacturer at the moment and encouraging them to use our opensource designs. Not only will it reduce their costs, it’ll also reduce the extra testing required for our Herald API, as we’ll have done it all ourselves already, reducing their commercial risk.
Publish Open Data
We’re pretty great at publishing code. GitHub has been an absolute godsend for increasing this trend and making code discoverable and enhancing collaboration.
I see a lot of projects that say “Go grab this dataset, and this code, and look what results you can get!” I always think “Dude, give me the results and I won’t have to run the full thing.” It saves time, but crucially it also saves carbon.
We’ve done extensive testing lately on RSSI and distance conversion with phones. We took the time to write automated robots to perform this testing. Not only did we then use that data to improve our distance conversion algorithms, but we’ve published millions of datasets too.
This allows others to create their own, possibly better, algorithms – and it not only reduces their testing time, but also the energy they’d consume rerunning the same tests.
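For anyone new to the problem, a common textbook starting point (not necessarily the algorithm Herald uses, and with hypothetical calibration constants) is the log-distance path loss model:

```python
# Minimal sketch of a textbook RSSI-to-distance estimate using the
# log-distance path loss model. The calibration constants are hypothetical,
# not values taken from Herald's published data.

def estimate_distance_m(rssi_dbm, rssi_at_1m_dbm=-60.0, path_loss_exponent=2.0):
    """d = 10 ** ((RSSI@1m - RSSI) / (10 * n))"""
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

if __name__ == "__main__":
    for rssi in (-55.0, -65.0, -75.0):
        print(f"{rssi} dBm -> ~{estimate_distance_m(rssi):.2f} m")
```

Published datasets like the ones above let you fit those constants to real phones instead of guessing them, without burning energy redoing the measurements.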
Conclusion
The above are just my stream-of-consciousness musings on areas where we as Computer Scientists can help avoid a climate catastrophe. You can probably each think of half a dozen other things yourselves. So do them, every day, and know that not only are you writing good code, but you’re saving the world.
And also saving the Penguins, which is always a personal ambition! (I’m sure Linus Torvalds would approve too!)
We each have a responsibility to do what we can. You recycle, you use the minimum of water at home, and you drive an efficient car. Now write efficient code – it could have a much bigger impact on the world than anything else you do as a Computer Scientist!
We have a moral obligation to do this. We’re the only ones who can ensure the code is efficient and that we’re not wasting hardware resources, so let’s go and do this every day!
Go forth! Write efficient code! Save the environment and the world!
– Me