Monday, February 14, 2011

Continuous Test Integration, 2

Following up on my first post on CTI, here is how I automated launching Selenium from within Ant to achieve Test Automation for a client.

This uses the Selenese flavor of defining Selenium tests. The Java (or other Selenium RC) clients are good options as well (especially with the pending 2.0 release), but we chose Selenese because the HTML output reports were very readable for the Product Owner.
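(In case you haven't seen one, a Selenese suite is just an HTML table of links to test case files. Here is a minimal sketch of one; the LoginTest and WithdrawCashTest names are made-up placeholders for whatever your own test cases are called.)
<html>
<head><title>YOURSuite</title></head>
<body>
<table>
    <tr><td><b>YOURSuite</b></td></tr>
    <!-- Each row links to one Selenese test case file, relative to this suite file -->
    <tr><td><a href="LoginTest.html">LoginTest</a></td></tr>
    <tr><td><a href="WithdrawCashTest.html">WithdrawCashTest</a></td></tr>
</table>
</body>
</html>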

Here is the launching target for Ant; remember to replace "YOURSuite.html" with... well... your suite of Selenium tests.
<target name="run.selenium">
    <!-- Manual stop, to make time for starting a debugger
    <input message="press enter"/>
    -->
    <mkdir dir="${selenium.report.dir}"/>
    <java jar="${selenium.server.jar}" fork="true" resultproperty="selenium.result">
        <jvmarg value="-Dhttp.proxyPort=8888"/>

        <arg value="-userExtensions"/>
        <arg value="${test.classes.dir}/selenium/user-extensions.js"/>
        <arg line="${firefox.selenium.profile.dir.arg}"/>

        <!--
        <arg value="-log"/>
        <arg value="${selenium.report.dir}/selenium.pb.log"/>
        -->
        <arg value="-htmlSuite"/>
        <arg value="*firefox"/>
        <arg value="${tomcat.url}"/>
        <arg value="${test.classes.dir}/selenium/YOURSuite.html"/>
        <arg value="${selenium.report.dir}/YOURSuiteResults.html"/>
    </java>
</target>
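Note that the java task captures the server's exit code in selenium.result but doesn't act on it. If you want the build itself to go red when the suite fails, something like the following sketch (my own addition, assuming the Selenium server exits non-zero when the suite fails) could be appended just before the closing </target>:
<fail message="Selenium suite failed, see ${selenium.report.dir}/YOURSuiteResults.html">
    <condition>
        <not>
            <equals arg1="${selenium.result}" arg2="0"/>
        </not>
    </condition>
</fail>
To tie this together with the Tomcat targets from my earlier post, a wrapper target along these lines works (the "cti" name is just an example; note that if run.selenium fails, Ant skips stop.tomcat, so a CI job may want to stop Tomcat in a separate step):
<target name="cti" depends="start.tomcat, run.selenium, stop.tomcat"
        description="Start Tomcat, run the Selenium suite, then stop Tomcat"/>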
I hope this helps others get more out of their Continuous Test Integration efforts. Please use this code for your own purposes without any warranty.

Continuous Test Integration

Continuous Test Integration (CTI) is the combination of Continuous Integration (CI) and Test Automation. Here is a strategy I developed for a client to enable CTI on both the developers' machines and the Hudson CI build machine.

(update: added Selenium launching post to carry this one more step forward)

This approach uses Ant and Tomcat to automate the launching of servers. (If you can use it, I highly recommend Jetty over Tomcat for this purpose; it has a much more direct API.) Here's one way that I've gotten Tomcat to work within Ant.

Here's the launching code; I'll explain some of it below:
<target name="start.tomcat" depends="init, stop.tomcat" description="Start the embedded tomcat server">
    <!-- The parameterized server.xml file changes the Connector with port "8080" to port "${catalina.port}". -->
    <!-- An alternative approach would be to perform an XSL transform on the server.xml file. -->
    <copy file="${ivy.downloaded.lib}/tomcat/parameterized-server-6.0.26.1.xml" tofile="${tomcat.dir}/conf/server.xml"/>

    <!-- Launch Tomcat -->
    <echo message="Launching Tomcat Server"/>
    <java classname="org.apache.catalina.startup.Bootstrap" dir="${tomcat.dir}" fork="true" spawn="true">
        <jvmarg value="-Xmx256m"/>

        <!-- Debugging support
        <jvmarg value="-Xdebug"/>
        <jvmarg value="-Xrunjdwp:transport=dt_socket,address=8123,server=y,suspend=n"/>
        -->

        <jvmarg value="-Djava.util.logging.config.file=${tomcat.dir}/conf/logging.properties"/>
        <jvmarg value="-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager"/>
        <jvmarg value="-Djava.endorsed.dirs=${tomcat.dir}/endorsed"/>
        <jvmarg value="-Dcatalina.base=${tomcat.dir}"/>
        <jvmarg value="-Dcatalina.home=${tomcat.dir}"/>
        <jvmarg value="-Djava.io.tmpdir=${tomcat.dir}/temp"/>
        <jvmarg value="-Dcatalina.port=${catalina.port}"/>
        <jvmarg value="-Dcatalina.shutdown.port=${catalina.shutdown.port}"/>
        <jvmarg value="-Dcatalina.ajp.port=${catalina.ajp.port}"/>
        <classpath>
            <pathelement location="${tomcat.dir}/bin/bootstrap.jar"/>
        </classpath>
        <arg line="start"/>
    </java>

    <!-- Confirm Tomcat is running -->
    <waitfor checkevery="1" checkeveryunit="second" maxwait="${max.wait}" maxwaitunit="second" timeoutproperty="tomcat.failure">
        <http url="${tomcat.url}"/>
    </waitfor>
    <fail if="${tomcat.failure}" message="Could not start ${tomcat.url}."/>
    <echo message="Launched Tomcat Server ${tomcat.url}"/>
</target>
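For reference, the parameterized server.xml copied at the top of that target is just the stock Tomcat 6 server.xml with the hard-coded ports replaced by property references (at least the 8080 Connector, and presumably the shutdown and AJP ports, since those are passed in as well); Tomcat substitutes system properties in server.xml, which is why the ports are supplied as -D jvmargs above. A rough sketch of just the relevant elements, not the whole file:
<Server port="${catalina.shutdown.port}" shutdown="SHUTDOWN">
    <Service name="Catalina">
        <Connector port="${catalina.port}" protocol="HTTP/1.1"
                   connectionTimeout="20000" redirectPort="8443"/>
        <Connector port="${catalina.ajp.port}" protocol="AJP/1.3" redirectPort="8443"/>
        <!-- Engine, Host, listeners, etc. exactly as in the stock Tomcat 6 server.xml -->
    </Service>
</Server>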

The java task is the one that actually launches Tomcat; this is, at least in my experience, the right set of parameters for launching Tomcat 6. The following defines most of the Ant properties used here. Where tomcat.dir points is up to you (we used Ivy to get a Tomcat zip).
<property name="env.CATALINA_PORT" value="8322"/>
<property name="catalina.port" value="${env.CATALINA_PORT}"/>
<property name="env.CATALINA_SHUTDOWN_PORT" value="8323"/>
<property name="catalina.shutdown.port" value="${env.CATALINA_SHUTDOWN_PORT}"/>
<property name="env.CATALINA_AJP_PORT" value="8324"/>
<property name="catalina.ajp.port" value="${env.CATALINA_AJP_PORT}"/>
<property name="tomcat.host" value="http://localhost"/>
<property name="tomcat.url" value="${tomcat.host}:${catalina.port}/"/>
This has the advantage that it provides default values, but still allows either environment variables or Ant properties to override them (we used the Hudson Port Allocator Plugin to accomplish this in a multi-build environment).
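One detail worth calling out: for the environment variable override to work, the environment has to be loaded into the env.* properties before the defaults above are declared, because Ant properties are immutable (the first one set wins). A sketch of that line, which belongs near the top of the build file:
<!-- Load OS environment variables as env.* properties. This must come before the
     default env.CATALINA_* properties above, so that an exported CATALINA_PORT
     (for example, one allocated by the Hudson Port Allocator Plugin) takes
     precedence; otherwise the hard-coded defaults apply. -->
<property environment="env"/>
Command-line overrides work the same way, for example: ant -Dcatalina.port=9090 start.tomcat.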

Finally, here is how to shut down Tomcat. The only disadvantage I've seen with this is that if Tomcat isn't actually running, it leaves a failure message in the Tomcat logs. We've ignored that.
<target name="stop.tomcat" description="Stop the embedded tomcat server">
    <!-- This one is a little different: no depends, and the mkdir. It's this way so it still succeeds when ivy.clean must occur. -->
    <mkdir dir="${tomcat.dir}"/>
    <java classname="org.apache.catalina.startup.Bootstrap" dir="${tomcat.dir}" fork="true" spawn="true">
        <jvmarg value="-Djava.util.logging.config.file=${tomcat.dir}/conf/logging.properties"/>
        <jvmarg value="-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager"/>
        <jvmarg value="-Djava.endorsed.dirs=${tomcat.dir}/endorsed"/>
        <jvmarg value="-Dcatalina.base=${tomcat.dir}"/>
        <jvmarg value="-Dcatalina.home=${tomcat.dir}"/>
        <jvmarg value="-Djava.io.tmpdir=${tomcat.dir}/temp"/>
        <jvmarg value="-Dcatalina.port=${catalina.port}"/>
        <jvmarg value="-Dcatalina.shutdown.port=${catalina.shutdown.port}"/>
        <jvmarg value="-Dcatalina.ajp.port=${catalina.ajp.port}"/>
        <classpath>
            <pathelement location="${tomcat.dir}/bin/bootstrap.jar"/>
        </classpath>
        <arg line="stop"/>
    </java>
    <waitfor maxwait="60" maxwaitunit="second" checkevery="1" checkeveryunit="second" timeoutproperty="${stop.timeout.property}">
        <not>
            <http url="${tomcat.url}"/>
        </not>
    </waitfor>
    <fail if="${stop.timeout.property}" message="Still running ${tomcat.url}."/>
</target>
I hope this helps others get more out of their Continuous Test Integration efforts. Please use this code for your own purposes without any warranty.

Friday, January 29, 2010

Abstract != Vague

I've got a bone to pick, so I'll do it here. This is a problem I have with both developers (architects) and analysts (customers).

Context: This rant applies to models/designs/architectures of a problem or solution domain. It doesn't apply to the actual coding of a solution (much).

First, some definitions:
  • Abstract: something that concentrates in itself the essential qualities of anything more extensive or more general, or of several things; essence.
  • Concrete: pertaining to or concerned with realities or actual instances rather than abstractions
  • Precise: definite or exact in statement
  • Vague: not clearly or explicitly stated or expressed

Here's the picture:
What does this mean?

Analysts tend towards Abstract-Vague. "Let's not get bogged down with details right now."
Developers tend towards Concrete-Precise. "I need all of the table properties fields."

Fortunately, almost no one tends to Vague-Concrete. Phew.

Unfortunately, few move towards the most valuable region: Abstract-Precise. That's the most valuable because it offers the least noise (Concrete details) and the most information (Precision).

Here's an example. Squares are types, circles are interactions.
Notice that "Money" isn't concretely defined here. It could be a BigDecimal, double, or Smalltalk Number. Also, that Post Condition is much more precise than what a lot of requirements documents usually specify.

Ah, I feel better.

Here's my rule of thumb for achieving more precision:
Add elements to your model only in order to exactly say what your audience thinks is really important.
In the example above, the balance, amount, and cash properties are present only because a Withdraw requires them to precisely describe what needs to be done.

Copyright references: Skull and Crossbones, Dollars

Thursday, June 18, 2009

Criteria for Innovative Success

I've got this in my head and want to write it down. I'm still tweaking it, and would like feedback.

Criteria for Innovative Success:
  • 1a) A Shared Vision of Success
  • 1b) Willingness to drive towards that Vision
  • 2) Reflective Problem Solving Staff
A Shared Vision is, well, shared. Everyone involved should be able to articulate it and write/draw it on a single whiteboard.

I split 1) into two parts, or more specifically added the second part. I think that just having a vision isn't sufficient; the will to see it through and cut away what's not helpful is critical. I suppose the metaphor for 1b) would be chipping marble away to make a sculpture.

"Reflective Problem Solving" is a phrase I got from "Managing to Learn" by John Shook. I've got more to say about this book and will review it when I can, but having a team of people who refectively problem solve is very different than a team that does what it's told.

In my experience at the Lean Kanban conference I concluded that a Kanban board with WIP limits is a "concrete reflective tool". I'm now on the hunt for "concrete reflective" tools and ideas, with a strong belief that such tools will be adopted by more teams than abstract or prescriptive tools.

Tuesday, May 26, 2009

Presenting at Austin JUG tonight, Kanban

Update: The presentation went well, lots of good questions and conversation.

Link to the slides: http://gistlabs.com/john/pubs/2009/05/AJUG/


I'm presenting at the Austin Java Users Group tonight, on Kanban.

The slides will get posted there and on my company website, http://gistlabs.com, shortly.

Tuesday, May 19, 2009

Setting up our Kanban board

Last week one of my clients and I set up a Kanban board for the team. We did it as a physical board, and we're backing each card with an issue in a tracker.

We plan to use the issue tracker for these purposes:
  • generate a Cumulative Flow Diagram (perhaps scripting an export to CSV or Excel)
  • searchable index of activity
  • release management tracking
  • conversation (comments, emails, checkins) for details larger than a notecard
Here's what the board looks like:
Some details (that are hard to read):
  • A Prioritized Queue on the left, highest priority on top
  • Three workflow/VSM steps: InProgress, Review, Deployment
  • Review covers developer peer review and Staging (on QA, for general review)
  • Deployment Ready means available for production deployment; Deployed means recently deployed.
  • Three swimlanes: two for development activities, one for IT operations support.
This is the current prototype for the cards:
Some of the tools that we use include sticky post-it notes and Stikky Clips. (Note: We found the Stikky Clips at a teacher supply store, not a big office supply store.)