Monday was the Requirements Engineering Education and Training (REET) workshop that I co-chaired with Ljerka Beus-Dukic. I’m always nervous about how these things will go – there are so many unknowns with the environment, the presentations, and the timing. However, I’m absolutely thrilled with the end result! We had 6 papers presented, and the format differed from a normal conference in that we had big blocks of discussion to really dive into some of the commonly shared issues in this field.
We had a lengthy mid-afternoon discussion around how you really teach anything about requirements to students in a semester-long class (or less!) and where requirements training fits into the overall teaching of the software lifecycle at the university level. The thinking is that most people learning about requirements in industry aren’t brand new to software development; they already understand a bit about coding and verification, so they understand the value and context for requirements. Yet trying to teach it to a freshman class would leave the students thinking “why do I need to know this?”
Another key point, important at all levels, is repetition in the lessons students receive. It came up again in the context of academia, where instructors may teach requirements to freshmen and then need to reinforce it throughout other classes. This certainly isn’t specific to requirements; it’s just a basic principle of good training.
We also talked quite a bit about how you assess students during and after training. This is a long-standing BIG problem across academia and industry. One issue with assessing whether any behaviors changed in an academic environment is that you teach the material and students may use it on a project, but then you never see them again. One participant actually talks to the industry contacts who hire his students to find out whether they observe better results from them. In industry, it’s very hard to isolate whether results are attributable to the training. We can look for modified behavior, but that requires a dedicated resource. I’ll probably talk about this more in a day or so, since we have a panel related to this on Thursday!
We then ended the day with 3 different sets of training activities, presented with the intent of getting participant feedback on the activities, but also leaving the workshop participants with the opportunity to take the exercises back and reuse them. At the very end, we played one of our ArgonDigital training games: Bet on Yourself Pictionary. It was fun because I was working with a group of people who were already familiar with requirements models in some form, so they could play our game, which basically enlists 2 “students” to act as analysts drawing requirements models for a specified scenario, trying to get the rest of the class to elicit the requirements from them. Oh, and they had to do all of this without speaking. It was great fun and, if nothing else, a lively way to end the day!