by Philip Lew, XBOSoft CEO
Today I had the opportunity to attend the international software testing conference SofTec Asia 2017 in Kuala Lumpur, Malaysia, as both a speaker and a delegate. As always, I try to meet lots of folks and listen attentively, as I tend to learn more when I’m not talking! Here is a short summary of the day:
Break Down Tests to Determine Function
Carol Dekkers spoke on The Top 10 Uses for Function Points in Mature Software Organizations. In her talk, she referenced many of Capers Jones’ metrics and models, and explained how function points serve as a common means of determining the size of a software product’s functionality. That size can then be the basis for estimating testing time, among other things.
The thing that really hit home for me was Dekkers’ view of the breakdown of test cases between input, output, and inquiry (among many other categories). If you are writing, executing, or reviewing test cases, this breakdown provides some interesting insight into whether your test cases are complete.
For example: If you review test cases and find there are 100 test cases related to input, and 10 for output, you inherently know that your output test cases are inadequate. She put up a pie chart of the average percentages based on projects in their database.
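That completeness check can be sketched in a few lines of code. The benchmark percentages and tolerance below are hypothetical placeholders, not the actual figures from Dekkers’ project database:

```python
# Hypothetical benchmark: expected share of test cases per category.
# These numbers are illustrative only, not Dekkers' actual data.
BENCHMARK = {"input": 0.40, "output": 0.35, "inquiry": 0.25}

def coverage_gaps(counts, benchmark=BENCHMARK, tolerance=0.10):
    """Flag categories whose share of test cases falls well below benchmark."""
    total = sum(counts.values())
    gaps = {}
    for category, expected in benchmark.items():
        actual = counts.get(category, 0) / total
        if actual < expected - tolerance:
            gaps[category] = (round(actual, 2), expected)
    return gaps

# 100 input cases but only 10 output cases, as in the example above:
print(coverage_gaps({"input": 100, "output": 10, "inquiry": 40}))
```

Running this flags the output category as under-covered, which is exactly the signal the pie-chart comparison gives you at a glance.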
The other message I got was the importance of consistency in any measurement you do; not only consistency in measurement methods, but also consistency in context.
That’s hard to do. To me, it seems very difficult to “normalize” different contexts. But since it is so hard, why not just take a best guess? So many assumptions and so much work go into an estimate; why not do it the agile way and make a best guess based on the information you have?
You can never replace experience and past data in providing a good basis for estimates. The important thing is to collect the data, determine how far off you were, and use that for future estimates.
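The feedback loop described above can be sketched as follows. The history data and the averaging approach are my own illustrative assumptions, not a method from the talk:

```python
# Illustrative past projects: (estimated days, actual days).
# These numbers are made up for the sketch.
history = [
    (10, 14),
    (20, 25),
    (8, 9),
]

def calibration_factor(history):
    """Average ratio of actual to estimated effort across past projects."""
    return sum(actual / estimate for estimate, actual in history) / len(history)

def calibrated_estimate(raw_estimate, history):
    """Adjust a new best guess by how far off previous estimates were."""
    return raw_estimate * calibration_factor(history)

# A raw 12-day guess, corrected by the historical under-estimation:
print(round(calibrated_estimate(12, history), 1))
```

The point is not the particular formula but the loop: estimate, measure the miss, and feed the miss back into the next estimate.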
AI Is Changing Software Testing
Tariq King gave a very insightful talk on artificial intelligence in software testing. In doing so, he gave a great review of some of the software testing tools and companies offering what they call AI-based software testing.
Many of the companies he researched didn’t meet expectations, using AI and machine learning only as buzzwords to promote their product or service, without really delivering. While some did use AI to automate the manual part of test automation (creating automated scripts is MANUAL work), the message was that software testing will change.
The bots are not coming; they are here. The question is how intelligent they can become, what they will do, and what we, as software testers, will do.
For me, the big takeaway was that domain-agnostic testing services can be automated by bots, but it will be hard to automate testing services that require on-the-spot thinking and adaptation, especially those depending on expertise that can only be held in a human head: knowledge germane to the application domain, e.g. accounting or EHR.
What About Quality As A Service?
In the panel session on Testing As A Service, experts discussed their views on the importance of testing and on treating it as a needed service. The big takeaway for me was: why not Quality As A Service (QaaS)? Why does testing have to be done by a separate department?
Importance of IoT Security Testing
I was excited to see Jon Hagar’s talk on IoT security testing, especially after he and I discussed IoT testing in a webinar in April. The big thing for me is that IoT security is a big deal. Those things that we thought were big deals, or ignored because we thought they would never happen, have happened.
Being Agile, Not Just Doing Agile
My hope is that everyone learned about more than just agile testing and the methodologies behind it. In my presentation on the 7 Habits, I often referred back to The 7 Habits of Highly Effective People by Stephen Covey, and how we need to be agile, rather than just do Agile.
Doing Right by the Client
Graham Bath gave a talk on Transformation of Test Organizations: The Rise of Testing as a Service. He discussed the big challenges in test organizations, including TPI, test automation, and test management, and talked about putting together a testing service catalog with service packages that work well together when working with your clients.
Graham, coming from a testing company point of view much like XBOSoft’s, provided insights into various business models and contractual terms that can benefit both the testing provider and the client.
Understanding and Defining
The final talk of the day, John Fodeh on Quality in the Digital Age, focused on the impact of digital technologies on software testing. One of the things I remember was the speed of change. He talked about the Gartner Hype Cycle and how, from one year to the next, technologies appeared in the “hype” portion (or even the productive portion) without ever having appeared in the earlier portions of previous versions of the cycle, meaning those technologies had leaped straight through to later parts of the cycle.
The effect of all this: as testers, we must get accustomed to such rapid innovation, both in the things we are required to test and in our environments, tools, and methods. Fodeh also discussed the definition of software quality, something close to my heart.
Social networking allows people to discuss what our software does and how they like or dislike it. What is really powerful, though, are tools that allow us to listen to and interpret what users are saying, so we can understand their view of our software’s quality. This could be more important than how many defects make it to production, although the two are probably, but not necessarily, correlated.
More in Store
Tomorrow I’ll be giving my keynote, What Does Good Service Mean?, a great subject given the theme of the conference, Testing As A Service. I’ll be interacting with the audience as usual. I’m wondering how they’ll define good service.