
Monday, March 13, 2017

Why Codethink is a founding member of the Civil Infrastructure Platform, a Linux Foundation initiative

This blog post was originally published on the Codethink website on Thursday, March 9th.

On April 4th 2016 a new Linux Foundation initiative called the Civil Infrastructure Platform (CIP) was announced. CIP aims to share efforts in building a Linux-based commodity platform for industrial grade products that need to be maintained for anything between 25 and 50 years - in some cases even longer. Codethink is one of the founding members.

Industrial grade use cases


To explain why this initiative is relevant, let me go over the use cases that motivate companies like Siemens, Toshiba, Hitachi, and Renesas to share efforts.
During the Open Source Leadership Summit, Noriaki Fukuyasu (Linux Foundation) and I, drawing on the experience of Siemens, Hitachi and Toshiba, described the development life cycle of industrial grade products. For a railway management system, for example, it looks as follows:
  • Analysis, design and development: 3-6 years
  • Customizations and extensions: 2-4 years
  • Certification and other authorizations: about 1 year
  • Further certifications and authorizations for each new release or update: 3-6 months
  • Expected operational lifetime: 25-50 years
So on average, an industrial grade product might take 5 to 7 years from conception to deployment. This is consistent with our experience in other industries like automotive, where life cycles are also quite long even though the expected lifetime is shorter.

A key part of the life cycle is maintenance. Because it lasts so long, the associated risks are high. The certification processes required to introduce significant changes in already deployed systems are painful and expensive. In addition, the capacity to simulate a production environment is, in general, limited. The same is true in other cases, such as energy production plants.

Open Source principles in the Civil Infrastructure industry


It’s obvious that Open Source could have a dramatic impact on this industry. By sharing efforts, corporations can commoditise a significant portion of the base system and focus on their differentiating factors, while transparency increases both control and the quality of that shared starting point over time. Collaboration with upstream will bring even greater benefits.
Two immediate challenges come to mind when thinking about Open Source in this industry:
  • Developing processes and practices to produce software for safety-critical environments.
  • Bridging the gap between the Open Source approach to software maintenance and the approach currently taken when building large-scale platform projects. For instance, how can the practice of updating each Open Source component to the latest upstream stable version be reconciled with a typical industry SDLC?

Can you reduce the gap?

For years we have been working on transformation projects where one of the goals has been to reduce the gap between the software our customers ship and what upstream is continuously releasing. One of the key steps is adapting an organisation’s processes using FOSS tools. Over the years we have been strong advocates that the closer to upstream you are, the more you benefit from the Open Source development model, reduced maintenance cost being one of the main advantages.

So why did we get involved in an initiative that aims to maintain a kernel for 25 years then?


The short answer would be... because we love a challenge!

Safety-critical Linux-based systems are a challenge currently being faced in the automotive industry, for instance, where Codethink is a strong player. When we analysed some of the industrial-grade use cases, what caught our attention was not just the magnitude of the second challenge listed above, super-long-term maintenance, but also the apparent conflict between the industry's requirements and the well-known Open Source practices referred to earlier.

Hence the main driver for an Open Source consultancy like Codethink to participate in an initiative like CIP is to learn by doing, that is, putting Open Source development, delivery and maintenance best practices under stress in one of the toughest environments. We bring our experience in producing embedded Linux-based systems and our Open Source culture, working together with industry leaders to find solutions to these challenges by looking at them with FOSS eyes.

Current activities

Codethink is participating in CIP in several capacities, the most relevant being:

Kernel maintenance
The first CIP-approved kernel is 4.4, an LTS kernel supported until February 2018. Ben Hutchings is the initial CIP kernel maintainer. Besides providing support for the reference platforms, Ben is working on several activities, such as backporting security patches (for example those from the Kernel Self Protection Project, KSPP) and consolidating the maintenance policies, taking the kernel community's as a reference.
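
The mechanics of a backport are simple in principle. Here is a minimal sketch, assuming a checkout of a CIP kernel tree; the branch name and commit IDs are placeholders, not the maintainers' actual workflow.

```python
# Minimal sketch: cherry-picking upstream security fixes onto a
# long-term maintenance branch. All identifiers are illustrative.
import subprocess

BRANCH = "linux-4.4.y-cip"   # assumed name of the CIP kernel branch
FIXES = ["<upstream-sha>"]   # placeholder commit IDs to backport

subprocess.run(["git", "checkout", BRANCH], check=True)
for sha in FIXES:
    # "-x" records the upstream commit ID in the backport's message,
    # keeping the provenance of each fix auditable for decades.
    subprocess.run(["git", "cherry-pick", "-x", sha], check=True)
```

The hard part, of course, is not the cherry-pick itself but deciding which fixes apply to a 4.4 base and validating them on the reference platforms.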

Testing tooling

kernelci.org is one of the most successful testing projects in Open Source. Its impact on the kernel community is growing, as is the number of people and companies involved. It was designed and developed as a service in which the testing activities take place in distributed board farms (labs).

Codethink has been working on making these tools easy to deploy on a developer's machine through a VM, so that developers can test kernels on directly connected boards. This first milestone of the CIP testing project is called Board At Desk - Single Developer. The activity was described at the Open Source Leadership Summit 2017, and the first beta was released during ELC 2017.
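
To give a flavour of what this enables: once the VM is up, a developer can drive a directly connected board through the lab scheduler bundled in the image. The sketch below uses LAVA's XML-RPC interface, which the Board At Desk VM builds on; the host, credentials and job file are hypothetical.

```python
# Sketch: submitting a kernel boot/test job to a local LAVA instance,
# such as the one bundled in the Board At Desk VM. The URL, token and
# job definition file below are placeholders.
import xmlrpc.client

server = xmlrpc.client.ServerProxy(
    "http://admin:my-api-token@localhost/RPC2")

# The YAML job definition names the board type, the kernel image to
# boot on it, and the test suites to run afterwards.
with open("kernel-boot-test.yaml") as job:
    job_id = server.scheduler.submit_job(job.read())

print("Submitted LAVA job", job_id)
```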

Conclusion

The challenges that industrial-grade product development and maintenance introduce for Open Source are great, especially in two aspects: safety-critical development and super-long-term maintenance. Codethink is working in CIP to help the industry overcome these challenges by adding our Open Source perspective.

Learn more about the CIP project by checking the following slides and videos from the conferences in which CIP members have participated.

Thursday, May 12, 2016

Testing => quality. Really?

Introduction


Nowadays automated testing is becoming mainstream. Organizations and projects are investing significant effort in creating tests, using tools to automate them, and plugging them into their delivery pipelines. Combined with continuous integration tools, automated testing becomes significantly more useful. I find this trend unavoidable: sooner or later every software organization will go through it, if it has not already.
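
To ground the terminology before going on: an "automated test" here means a small, repeatable check like the one sketched below, which a continuous integration service can run on every change. The function under test is made up for illustration.

```python
# A minimal automated test: the kind of check a CI system runs on
# every commit. parse_version() is a made-up example function.
import unittest


def parse_version(text: str) -> tuple:
    """Split a dotted version string such as '4.4.120' into integers."""
    return tuple(int(part) for part in text.split("."))


class ParseVersionTest(unittest.TestCase):
    def test_dotted_string(self):
        self.assertEqual(parse_version("4.4.120"), (4, 4, 120))


if __name__ == "__main__":
    unittest.main()
```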

This movement is fairly new. Concepts like automated testing or continuous testing, in the context of continuous delivery, do not yet have ten years of history. We need to be careful with trends. The topic is so hot these days that associating automated testing with quality is becoming the norm, also in Open Source.

Open Source became the winning "culture" in several industries more than five or ten years ago. Automated testing in the context of continuous delivery was not popular back then. Still, Open Source influence and adoption expanded, also because of its superior quality.
How come?

When I think about quality in Open Source, one key principle and three actions come to my mind.

 

Principle: transparency


Transparency is about seeing what others are doing, but also about understanding what they do. This second part is too often forgotten.

Action 1: Code review


Transparent code review (again: see and understand) is, in my opinion, the most powerful quality assurance measure a project or organization can apply. It is the fundamental action in what some call the FLOSS development model.

It has a side effect that I really like as a manager: it improves younger developers' skills. It also brings many other positive side effects.

 

Action 2: Dogfooding


A few weeks ago, in a workshop with a customer, Codethink CEO Paul Sherwood was explaining this point with an example that I had stopped making several years ago; I found it so obvious that at some point I gave up fighting for it. After listening to him, not anymore. The example was: if your organization is developing Linux-based products, use Linux, not Windows.

Simple, right?

Dogfooding is another of those actions that long-standing Open Source projects frequently take for granted but that is not the norm in commercial environments. Many projects driven by newcomers to Open Source do not pay enough attention to it.

The mid-term impact of dogfooding on quality is impossible to calculate. Still, I believe it is huge.

Action 3: A delivery model that maximises the influence of early adopters


Who are early adopters? They are the developers or power users who like to consume experimental versions or pre-releases of your "product". The proportion of them willing to report bugs is significantly higher than among regular consumers.

Increasing the number of early adopters, and reducing the hurdles they face to use your software, analyse and debug problems, and report them, should be a key activity for any project worried about quality assurance. Adapting your delivery process to maximise their impact has a positive effect not just on the use cases your software was designed for but on others too, expanding the knowledge of how your software behaves in the hands of users. As between developers and delivery engineers, the feedback loop with early adopters should be very short, so you can provide them with improved pre-releases in short cycles.

Open Source has reached its current position by understanding how important the role that early adopters play is.

Personal note about this third topic

I want to make a point here before moving forward.

It seems to me that there is a new wave of Open Source projects, especially those driven by commercial organizations, that underestimate the mid-term effect early adopters have on the quality of a project. I also see how the Continuous Delivery hype, focused on developers and delivery engineers, is leaving early adopters behind in some cases, especially in those Open Source projects developed and delivered by full-time dedicated engineers.

Many projects pay little attention to making their frequent releases truly installable and documented, to keeping them simple to debug without complicated tools or centralised infrastructure, to keeping their bug trackers simple and fast to use, or to treating bug reports as a valuable asset. In summary, early adopters cannot follow the pace and, when they can, they need to spend a lot of energy to be valuable.

Let's go back to the main argument.

 

Conclusion


Code review, dogfooding and early adopters in transparent environments have been, I believe, the pillars that have made Open Source what it is today in terms of quality. And then, only then, automated testing, or continuous testing, comes into place: in addition, not in substitution, not before, not in between... in addition.

Are you doing Open Source? Don't take shortcuts. Surf the "trend wave" instead of embracing it blindly. Learn first; look carefully at what sustainable projects are doing.

Quality is as much about culture as it is about having a nice dashboard full of green lights. Testing => Quality is, in general, a wrong association of ideas.

And yes, test frameworks, board farms executing thousands of tests, green lights on dashboards, etc. are awesome. Probably a fourth pillar in the near future.