We will begin with something light to eat and drink at 17.30
Presentation starts at 18.00
Performance testing has long been conducted as a single experiment against a fully complete, assembled system. Architecture, software, hardware, networks, data states, and workload models are all combined to try to create the most "accurate" simulation possible, producing test results predictive of the production experience.
Performance testing with this approach can be helpful for validating a completely assembled system, but in the new world of (a/A)gile development contexts, testing at the very end is too late to provide timely feedback. To be more useful, performance testing can be adapted to the component levels and iteration intervals that delivery teams work with.
We will discuss approaches and techniques for providing performance feedback earlier, more specifically, and more often.
- Strategies for designing, conducting, and tracking frequently repeatable performance tests.
- Techniques for testing individual components and incomplete systems.
- Suggestions for blending performance metrics into continuous integration.
- Other ways to provide performance feedback throughout a project.
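As one concrete illustration of blending performance metrics into continuous integration, a build can be gated on a simple latency budget so a component-level regression fails fast. A minimal Python sketch, where the component under test, the run count, and the 50 ms budget are all hypothetical placeholders:

```python
# Sketch of a CI performance gate: time a component repeatedly and
# fail the build if it exceeds a latency budget. The component and
# the 50 ms budget here are illustrative stand-ins.
import statistics
import time


def component_under_test(n: int = 1000) -> int:
    # Stand-in for the real component being measured.
    return sum(range(n))


def measure_latencies(fn, runs: int = 30) -> list:
    """Time repeated calls and return per-call latencies in milliseconds."""
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies


def within_budget(latencies, budget_ms: float = 50.0) -> bool:
    """Pass if the median latency stays under the budget.

    Using the median rather than the maximum keeps the check stable
    against one-off scheduling noise on shared CI hardware.
    """
    return statistics.median(latencies) <= budget_ms


if __name__ == "__main__":
    results = measure_latencies(component_under_test)
    assert within_budget(results), "performance budget exceeded"
    print("within budget")
```

Run as an ordinary test step in the pipeline, this turns a performance metric into the same pass/fail signal CI already understands; tracking the recorded latencies over time then gives the trend data the talk discusses.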
Senior Engineer, Mentora Group
Mountain View, California
I have worked in testing for 15 years and have specialized in performance and reliability testing for 12. I work for Mentora Group, a national testing consultancy, from my home in Mountain View, California.
I test in a wide variety of contexts, using tools appropriate to the job at hand. Some of the applications I've tested recently include Oracle E-Business ERP systems, a hospital's provider portal, a large-scale "second-screen" mobile app, a large B2B EDI translation clearing house, a custom SaaS application, and e-commerce web sites.
I am an organizer for WOPR, the Workshop on Performance and Reliability. I've presented at STPCon, and facilitated at CAST and STiFS (Software Testing in Financial Services). I am very proud to have completed BBST Foundations - the only testing certification I hold.
In my free time, I relax with my family, read, and try to enjoy life as much as possible. I'm a comedy nerd; I seek out street food, play video games, and follow professional basketball.
I am passionate about treating people like people - it makes them happy and more productive, and it is also moral. Everyone has value, and they are not interchangeable. Reducing people to "resources", "FTEs", "positions" or any other bloodless euphemism that denies their humanity is morally wrong. Thinking about my work this way led me to strongly identify with context-driven principles.
I think the ideas, approaches, and techniques that we explore in testing have larger applications, and are something the world desperately needs. Critical thinking, continual questioning, collaboration, debate, and courage are badly needed in a world so utterly inundated with bullshit. As we careen into a future run on algorithms, we are needed to help identify and assess risks, call out dangers, and aggressively represent our constituency - people.