All posts by PTPJimG

Santa Cruz Island pier

Vacation Memories during Lockdown: The Channel Islands

I’ve lived most of my life in California, but until recently I’d never been to any of California’s Channel Islands. The chain extends from Catalina and San Clemente Islands in Southern California north to the islands of Channel Islands National Park, off the coast of Santa Barbara and Ventura.

Map of Channel Islands National Park
Scorpion Ranch Campground, Santa Cruz Island

I’ve enjoyed a number of island vacations, including Sri Lanka, the Caribbean, Greece, and Polynesia. But I had never thought of an island vacation off the coast of California. After hearing friends talk about how much they enjoyed their day trip to Santa Cruz Island and watching “West of the West: Tales from California’s Channel Islands,” I decided it was time to go! I scheduled a five-day camping trip on Santa Cruz Island.

A good place to stay is Scorpion Ranch Campground, which is one of the few places on the island with easy access to drinking water. Reservations are made through Channel Islands National Park. I also arranged to have access to a kayak, which was easily done through Channel Islands Kayak Center. They also offer kayak tours, so I signed up for their “Island Cave Tour,” since the island is famous for its sea caves. I took my own wetsuit, fins, and goggles to do a little swimming on my own.

To learn more about the Channel Islands, I started my trip with a visit to the Santa Barbara Museum of Natural History. It’s in a very nice wooded setting near the Old Mission. They have information on local flora and fauna and a very interesting exhibit on Chumash life. Through radiocarbon dating they have shown that the Chumash lived on Santa Rosa Island as long as 13,000 years ago and on San Miguel Island as long as 11,000 years ago.

Santa Barbara Museum of Natural History

I then continued on to Ventura. Getting an early start meant being all packed and ready to go the night before and staying close to Ventura Harbor. There’s no electricity or cell phone reception at the campsite, or anywhere on the island as far as I could tell, so I also took a couple of books and a headlamp for the evenings. One of the books, Island of the Blue Dolphins, is based on the true story of the last surviving member of the people who lived on San Nicolas Island. The main character lived there alone for 18 years during the 19th century. I was happy to find it is still a great read.

Transportation to the islands is arranged through Island Packers, which provides regular service to Scorpion Bay on Santa Cruz Island. The boats are fast catamarans that skim over the surface of the water.

Ventura Harbor

The ride to the island takes over an hour. Most of the passengers are day visitors and overnight school-group campers. The highlight of the trip was watching dolphins surf in the wake of the boat. If it’s windy, the boat does its own surfing over the waves. We made brief stops, usually to let pods of dolphins pass. One of those stops was to pick up a Mylar balloon floating in the water; the crew made it a teaching moment, telling us about the hazards of pollution to ocean life. Sea turtles mistake the balloons for jellyfish, one of their food sources.

Also on view are the numerous offshore oil derricks that still operate. Oil has always seeped from the ocean floor here; the Chumash used it to seal their boats. But in 1969 there was a major oil spill due to a blow-out on one of the rigs. That event still ranks as the third-largest oil spill in U.S. waters and marked a significant milestone in the modern environmental movement.

Scorpion Anchorage is the nearer of the two landing locations on the island. It had been a sheep ranch before becoming part of the National Park system.

Scorpion Anchorage

The day we arrived it was foggy but comfortable, and it stayed that way for a good part of the time I was there. The photo shows our approach; a small pier is visible just right of center. At the end of the pier, a ladder leads up to the pier walkway.

Once we were docked, the camping gear was unloaded first, handed up the ladder by passengers and crew. I carried up my wetsuit, goggles, and fins. While we took care of that, the crew unloaded the kayaks and paddled them to shore.

It was springtime and the flowers were a treat. I wasn’t quite sure what was native or endemic, but there were knowledgeable people I could ask. Volunteers were working to eliminate invasive species such as black mustard and star-thistle.

North Bluff Wildflowers

Once threatened but now making a comeback is the island fox. The foxes had long coexisted with bald eagles, which feed mostly on fish. But when the bald eagle was eliminated from the islands in the mid-1950s, the golden eagle moved in and preyed on the foxes. A conservation effort in the early 2000s relocated the golden eagles to the mainland and reintroduced the bald eagle, which is territorial and keeps the golden eagles out. As you can see, the island fox is not afraid of humans.

Island Fox

The old Scorpion Ranch buildings now serve as the visitor center. This is a view from the Cavern Point trail.

Scorpion Ranch

The next day I checked out Smugglers Road, which makes a 7.5-mile round-trip hike from the campground to Smugglers Cove. The route was used by sea-otter traders and other smugglers almost 200 years ago, and by rum runners during Prohibition. The following photo is a view from Smugglers Road, almost directly across from where the previous photo was taken. (The Cavern Point trail is visible in the upper left corner.) It shows the pier where we landed; the visitor center is just out of view to the left.

Santa Cruz Scorpion Anchorage Pier

About two miles into the hike you can see the remains of an exploratory well drilled by Atlantic Richfield in 1966. They found water, not oil.

exploratory well

I also spent a couple of days on the water, exploring the coastline by kayak and doing a bit of swimming. There are some nice shallow caves to enter, seals to watch, and tunnels to ride through. This photo shows the volcanic rock that makes up a good part of the island.

sea tunnels

Early morning is the best time for kayaking, as there’s only a very light wind. Later in the afternoon, the wind picks up and the water gets choppier and harder to paddle against. To stop and enjoy the view, we’d grab onto kelp or just paddle in place. Although this area is protected, swells do move through.

One thing I learned about kayaking: adjust the seat back so you sit in a more upright position. I’d been sliding down in the seat, which made paddling difficult. And be sure to take plenty of fresh water; it’s easy to get dehydrated, even in overcast weather, and drinking water can reduce motion sickness.

It’s a good idea to have company, especially if you’re planning on going into the caves. I haven’t kayaked a lot, and never into caves, where timing is critical. Having guides allowed me to relax and have fun.

group kayak tour

A good resource for additional information about the Channel Islands is the National Park Service pamphlet “Channel Islands Interpretive Guide, Eastern Santa Cruz Island,” at: https://islandpackers.com/wp-content/uploads/2008/09/Santa-Cruz-Island-Interpretive-Guide-201414.pdf

Boat ride home with oil rig in the distance

Free Tools for Building Multi-format Documentation: See Our Article in EE Times

Concerned about vendor costs for producing technical documentation for the web? You might want to take a look at our recent article in EE Times, reprinted below. (See the original article at https://www.embedded.com/free-tools-build-multi-format-documentation-systems/.)

Using open source tools, it’s possible to create a documentation system that can present the same information on large and small displays.

In a recent column on Embedded.com, Max Maxfield blasted the slapdash guide that accompanied a module he’d purchased and asked why so many manufacturers neglect documentation until the last minute (see Basic documentation—is it too much to ask for?). Is a tight budget or a small staff really to blame, or simple procrastination? Either way, even a conscientious development team can’t supply useful help without a well-organized system in place to record work in progress, edit the information, and publish documentation in whatever formats are required for easy access. There are many ways to build such a system, either with commercial software, which can be expensive, or with free and open-source tools.

My firm has generated documentation for more than 30 years for semiconductor fabrication equipment, graphics processors, test instruments, network gear, CAE tools, and so forth. We’ve rescued clients on the brink of product rollout and worked on multiyear projects from inception. The best path for any manufacturer, whether a startup or an established enterprise, is to put in place, before product development begins, an internal documentation system tailored to deliver information in the ways most convenient to customers. Let’s consider a straightforward, low-cost approach one company used to get up and running quickly.

pdf-mobile-phone
Content from PDF (background) scaled for display on cell phone

A long-established manufacturer that produces biomedical instruments and related assays for disease screening sought a documentation system that could output product literature in both PDF and HTML formats from the same source text. The company had built an extensive library of instrument manuals and detailed guides for its many assays, all produced with the Adobe page-layout tool FrameMaker, and needed a flexible platform for generating searchable new documents for electronic display that would be virtually identical to their print counterparts. Both the content and the structure of every document, in either format, require FDA approval and are subject to stringent federal review.

The project was fast-tracked because a battery-powered portable instrument was in development. This instrument tests biological samples in the field to quickly determine whether patients have HIV, certain influenza strains, or other infectious diseases. It is intended for use at remote sites and in neighborhood clinics, especially in developing nations, where electrical power may be limited and economy is paramount. The user interface is a cell phone running a dedicated application, unlike the company’s other instruments, which communicate with a laptop or desktop computer.

Instructions for operating the instrument, as well as the guides for the assays it runs, reside in the cell phone; only brief startup steps for turning on the phone and opening the app are on paper. Assay guides run 30 to 40 pages in PDF, posing the challenge of presenting the material legibly, in an easily searched structure, on a display less than six inches long by three inches wide.

An open-source standard, DITA (short for Darwin Information Typing Architecture), was chosen as the foundation for the documentation system. DITA, which originated at IBM, defines an XML architecture for publishing information in multiple formats for print, Web display, and retrieval on mobile devices. Output in the various formats is produced with the DITA Open Toolkit, a collection of open-source software programs. The upshot of DITA is that content is kept separate from presentation and is written with reordering and reuse in mind.
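
For illustration, a minimal DITA concept topic might look like this (the id, filename, and text are hypothetical; the structure follows the OASIS DITA DTDs):

  <?xml version="1.0" encoding="UTF-8"?>
  <!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
  <concept id="assay-overview">
    <title>Assay Overview</title>
    <conbody>
      <p>Each assay guide begins with a short overview of the test.</p>
    </conbody>
  </concept>

The same topic file can then be routed to PDF, HTML5, or any other supported output.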

Some existing assay guides had to be translated from PDF for use with the portable instrument. The content was extracted in essentially a cut-and-paste operation and then tagged with DITA XML markup. Formatting templates were created so the documentation system would strictly adhere to the FDA-approved style for the company’s literature in PDF, and the toolkit was then used to output files in HTML5. The toolkit can render DITA XML files in several formats, including XHTML, HTML5, and PDF. Although FrameMaker, the page-layout program, can also export files in HTML, it would actually complicate building a Web portal for documentation: you can’t make inter-document links, for example, or readily create a hierarchy for building a documentation site.
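
A DITA map defines the document hierarchy from which each output is built. As a rough sketch (filenames hypothetical), a map and the corresponding build commands might look like this, assuming a recent DITA Open Toolkit release with its dita command-line tool:

  <?xml version="1.0" encoding="UTF-8"?>
  <!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
  <map>
    <title>Assay Guide</title>
    <topicref href="assay-overview.dita">
      <topicref href="specimen-collection.dita"/>
      <topicref href="running-the-assay.dita"/>
    </topicref>
  </map>

  dita --input=assay-guide.ditamap --format=pdf     (print output)
  dita --input=assay-guide.ditamap --format=html5   (web output)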

Arranging the HTML5 output from the documentation system for display on the cell-phone screen involved Bootstrap, a framework for automatically scaling websites for viewing on phones, tablets, and desktop computers. Bootstrap, a collection of cascading style sheets and JavaScript, employs a grid that defines how information should appear within different screen dimensions. On a large 4K display, for example, content can be presented in multiple columns; for smaller screens, you can define how elements shrink, rearrange, or drop out of view.
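
As a simple sketch using Bootstrap 4-style grid classes (the class names are Bootstrap’s; the content is hypothetical), a page might devote a quarter of a wide screen to navigation and stack everything into one column on a phone:

  <div class="container">
    <div class="row">
      <!-- full width on phones, 3 of 12 columns on medium screens and up -->
      <div class="col-12 col-md-3">Table of contents</div>
      <!-- full width on phones, the remaining 9 columns otherwise -->
      <div class="col-12 col-md-9">Document section</div>
    </div>
  </div>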

In this project, content is displayed in a two-column layout on a widescreen and in one column on the cell phone, with a collapsible table of contents at the top. On the phone, each document section amounts to a Web page, and information is displayed one section at a time. The table of contents can always be expanded for quick and easy navigation through the document; when it is collapsed, the material for one section is displayed, with links at the bottom to the next and previous sections.

During the development of this system, a technical detail tied to regulatory acceptance had to be resolved. When documents are authored in FrameMaker, tables that continue from one PDF page to the next repeat the table title. The PDF output from the DITA process, however, doesn’t repeat table titles from one page to the next—just the table headers.

Another tricky issue, more general in nature, was how figures are numbered in the HTML5 output from the DITA toolkit. The PDF output is fundamentally one continuous scroll, but the HTML5 output is broken into sections, and the toolkit restarts figure numbering at 1 in each section rather than numbering figures in succession. The fix involved adapting a bit of code from the PDF process. The PDF process merges all the topics mapped in DITA, the entire document hierarchy, into one big file, and that merged file is used to count such things as figures; the HTML process, by contrast, assembles the topics while each remains in a separate file. The code appropriated from the PDF process lets the HTML process maintain consecutive figure numbering.
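
The project’s actual code isn’t reproduced here, but the idea can be sketched as a small XSLT override for the HTML5 transform that numbers each figure by counting every figure preceding it (the override file is hypothetical; the class-attribute matching follows the usual DITA-OT idiom):

  <xsl:stylesheet version="2.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <!-- Assumes the transform sees the merged document, as the PDF
         process does, so preceding:: counts across all sections. -->
    <xsl:template match="*[contains(@class, ' topic/fig ')]
                          /*[contains(@class, ' topic/title ')]">
      <figcaption>
        <xsl:text>Figure </xsl:text>
        <xsl:value-of select="count(preceding::*[contains(@class, ' topic/fig ')]) + 1"/>
        <xsl:text>. </xsl:text>
        <xsl:apply-templates/>
      </figcaption>
    </xsl:template>
  </xsl:stylesheet>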

Any plain-text editor can be used at the front end of such a system to create content, though a modest investment in Oxygen, XMetaL, or another commercial XML editor is worthwhile. Those programs are much less expensive than FrameMaker, the conventional workhorse for document creation.

This approach, based on free and open-source software tools, can benefit startups especially: companies that want to build, quickly and inexpensively, a flexible documentation system that publishes material in multiple formats from the same content. The only problem remaining, therefore, is procrastination.

Building an Economical, Web-based Documentation System

A client who provides a web proxy service to shield enterprise customers from malware recently hired us to build a documentation system we had proposed, one that quickly produces content for web presentation. Draft text is shared, edited, coupled to graphics, and automatically tailored for export straight to the web in a smooth, secure process based on free and open-source tools.

The company acts as a proxy, executing customers’ web sessions, removing Flash and other active elements that might contain threats, and relaying the sanitized results to the customers, all without perceptible delay. Their customers need clear instructions for installing the service, but when the company approached us, the documentation at hand was a mix of PDFs, Word files, and a few online procedures, all in need of a thorough technical edit and a consistent style and branding. We converted the existing documentation so it would be clearly legible on the company website, and we structured a system for creating new documents that could be easily uploaded to the website and displayed in any screen format with no need for conversion. We had in mind an infrastructure that would be easy to maintain.

First, we converted the existing documents to plain text. Then we used Markdown, a simple markup language that provides a syntax for plain-text formatting, to generate the typographic conventions for headings, italics, bold type, numbered lists, unordered lists, and so forth. From there, we developed a consistent style in which documents would appear on the website.
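
For example, a few lines of Markdown (hypothetical content) carry all the formatting cues in plain text:

  # Installing the Service

  Complete these steps **before** enabling the proxy:

  1. Download the installer from the customer portal.
  2. Run the installer with *administrator* privileges.

  - Supported on Windows, macOS, and Linux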

Easy to implement

We based the documentation system on a static-site generator, Jekyll, which takes the plain-text files in Markdown, organizes them according to templates we created for text and style, and creates web pages and a navigation hierarchy. Run a single command, and the system produces a directory of files ready for upload to the web server.
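
The commands are Jekyll’s standard ones; the generated site lands in the _site directory:

  jekyll build    (generate the site into _site/ for upload)
  jekyll serve    (build and preview locally while editing)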

Jekyll is an open-source static-site generator originally intended for blogs, as an alternative to WordPress, which had become complicated and cumbersome and required managing a back-end database. It is extremely easy to use, and the features that make it helpful for blogs also make it a very convenient engine for a documentation system. For example, it can automatically list all the posts by an author in reverse chronological order and create tags so posts about a particular subject can be collected on one page. Jekyll thereby let us build an organizational framework for the documentation system, not just style templates. For fast website deployment, security, and minimal infrastructure maintenance, we decided a static-site generator would best serve the application.
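
Each Markdown page begins with a short block of YAML front matter that ties it to a template; the layout name and tags here are hypothetical:

  ---
  layout: doc
  title: Installing the Proxy Service
  tags: [installation, setup]
  ---

Jekyll reads the front matter, applies the named layout, and can gather every page sharing a tag onto one index page.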

The workflow is streamlined for document generation and deployment.

Streamlined workflow 

Once we converted the existing documents to plain text, we put them into a private repository on GitHub.com to share and collaborate with the client’s staff. The GitHub website is based on Git, a distributed version-control system. Everyone given company access gets a complete copy of the private repository, enabling the staff to work on the files and see each other’s changes.

The client’s chief of publications produced raw content building on the work we had done; we then did a technical edit of the new material and submitted changes to him, in an informal, ongoing process. With GitHub, however, the structure is in place to implement a formal process for merging changes from authors, who can make edits and then submit a pull request: I’ve made this change; do you want to accept it into the main branch?
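
The underlying Git commands are the usual ones (the repository name and branch here are hypothetical):

  git clone git@github.com:example/proxy-docs.git    (everyone gets a full copy)
  git checkout -b edit-install-guide                 (work on a branch)
  git commit -am "Clarify the installation steps"
  git push origin edit-install-guide                 (then open a pull request on GitHub)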

Beyond version control, GitHub gives a preview of how documents will look. Simply upload a Markdown file with its accompanying images, and GitHub renders the text and graphics in order, free of markup symbols. The publications chief can confirm how pages will be organized, set the layout with the Jekyll templates, and commit the results for upload to the company website.

The benefit of generating documents from plain text, instead of programs such as FrameMaker or Word, is that you can tap into decades of development founded on plain text; namely, all the sophisticated tools that can compare two very different versions of a text, show where they differ, automatically merge changes, and flag where a conflict must be resolved.
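
For instance, two standard Git commands do most of that work (the branch and file names are hypothetical):

  git diff main edit-install-guide -- install.md    (show exactly what changed)
  git merge edit-install-guide                      (merge it; conflicts are flagged inline)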

There is a downside to working in plain text, at least for engineers documenting product designs in progress: composing in plain text is not as familiar as writing in Word. The transition from Word, however, is not especially onerous.

Given the simplicity, convenience, and low cost of this combination of free and open-source software for building a flexible, robust online documentation system, the approach can benefit many ventures, especially startups with limited IT resources, small budgets, and pressing calendars.

Writing for Enterprise Mobile Apps

Building a New Publishing Process While Developing Content for a Future Product Release

We recently completed the first phase of a significant new project. It required designing, building, and implementing a new documentation process while at the same time developing content for three new products.

Our client builds mobile application software development tools. They were looking for a tech writing company that also could make their documentation process more agile and productive.

The process they had been using relied on structured FrameMaker, which was then converted to HTML and PDF files. But they found this slow, hard to manage, and outdated. Our role was to design a new content-development and workflow process and to assist in its implementation.

At the same time, our tech writers would be working with the client’s software developers, who are located at different regional centers. We would have to develop a working relationship with remote experts and collect, develop, update, and publish content in a timely manner.

The group assigned to the publishing process set about exploring the options; the group assigned to developing content immediately started working with the subject matter experts.

The Publishing Process

To start, we examined several options for publishing content to the web, including the Python-based Sphinx, DocBook, Slate, and others. We built demo sites to show their look and feel, and we evaluated their ease of use, their output templates, their suitability for the client’s resources and processes, and their scalability.

As a software development company, the client was already using Atlassian’s enterprise tools for issue tracking and team collaboration. The developers, our subject matter experts, were comfortable documenting the products in Confluence, the Atlassian wiki. And Confluence exposes a REST API, so the wiki could be queried and content pulled from it.
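
As a rough sketch (hostname, page ID, and credentials hypothetical), a single REST call returns a page’s content in Confluence’s storage format:

  curl -u writer:api-token \
    "https://wiki.example.com/rest/api/content/123456?expand=body.storage"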

We realized that if we could keep using that wiki, the developers would be much more responsive in delivering the content we needed. We could also use JIRA, the Atlassian issue-tracking tool, to monitor issues blocking content development.

The goal then became to build a process that used the wiki and integrated seamlessly into their workflow. The result would allow the client to be in complete control of the process, without having to learn any new technologies or tools.

However, important issues remained to be solved.

Because the client had in-house Node.js resources, we proposed using DocPad as the interface for pulling content from the wiki. To make this work, we built a special DocPad plug-in that automates the query and pulls the content from the wiki. The content is rendered into templates built for customer-facing material and then deployed to the client’s AWS servers.

The following screenshots show the content as developed in the Confluence wiki and the output as it appears on the company’s web site. Changes made on the wiki can be scheduled to appear on the web site whenever you choose.

Confluence wiki page (left) and the public-facing content (right)

So what does this mean for you? First, we listened to what the client wanted. We worked with them to determine the best approach to the problem, and we made maximum use of their in-house resources. The result was a process they completely control. At any time, anyone in the company can view the current status of the content. There’s no hiding, so you’ll always know whether the content will be ready with the product at release time. And when it’s ready, it can be pushed to the web site and packaged with the product build as part of the SDK. Fast and easy!

By the way, if you’re interested in the DocPad plug-in, you can get it from NPM (Node Package Manager) at https://www.npmjs.com/package/docpad-plugin-conflux or from GitHub at https://github.com/phoenixtechpubs/docpad-plugin-conflux.
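
Installation follows the usual DocPad convention of an npm package named docpad-plugin-<name>:

  npm install --save docpad-plugin-conflux

DocPad discovers plug-ins named this way in node_modules when it regenerates the site.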

Developing Document Content

Of course, what good is a process without content to publish? The first issue the tech writers had to resolve was developing a working relationship with subject matter experts located in three geographical regions. Site visits could be made to the local office, but video-conferencing tools were needed for the remote sites. Two that we found useful were Google+ Hangouts and join.me by LogMeIn, Inc.

Working with the subject matter experts, the tech writers created wiki pages for the product content. Links to those pages were then sent to the subject matter experts so they could write, review, and correct content. The “Watch” feature in Confluence alerted participants to any changes; the “Talk” plug-in was used for contextual Q&A exchanges. The tech writers ensured that the content was written in a consistent editorial style, that it fit within the overall organization of the material, and that the narrative read smoothly throughout.

During the development process, the wiki content was published to an internal review site so that anyone with authorization could see what it would look like when published to the web site. This was important because the template for the public-facing content is branded differently from that displayed in the wiki.

Document control was handled using Comala, a workflow add-on for Confluence. This ensured that the content went through the company approval process before being released to the web site.

So how did this work out for the client? They were never in doubt about the availability of the documentation at product release. And those last-minute changes that came in were no problem at all.

We’ve Moved!

That’s right, it was time to make some improvements. After so many years in our last location, and with a major project behind us, it was time to freshen things up.

Our offices now have a bright new appearance overlooking Willow Glen’s bustling Lincoln Avenue. And it’s not just about the new look. We’ve also upgraded all our office systems, including our file sharing services, our phone and internet systems, and our computer network. Our data speeds now are faster than ever, and our security meets the highest standards.

Of course all this had to happen while work continued without interruption. So a strong Thank You! to the Staff for your perseverance during the change. And a hearty Welcome! to our Clients. Drop by anytime and see the change. I’m sure you’ll like it.

Keeping business operations going while the new office is being prepared

Wiring the new office space with gigabit Ethernet and a fiber Internet connection.
This is as clean as it will ever look. Nothing like a new space.

We’ve arrived! Welcome to our new location. Drop by and say hello.

Tech Pubs: Stuck with a Bucket of PDFs?

For years, technical documentation has been developed using word-processing or page-layout tools such as MS Word, FrameMaker, Quark, and InDesign. The output is a book-length document, typically released as a PDF and designed to be read front to back. But PDFs have significant limitations, both for your customers and for your sales and marketing groups.

Typically, customers don’t read the technical documentation cover to cover before using the product. Rather, they refer to the documentation when they’re looking for the answer to a problem they’ve encountered with your product or technology. More often than not, they’ll start by searching the web. With luck, the PDF appears among the first search results. But that’s left to the Google search algorithm, which is not something you can control.

Then the PDF must be opened for the content to be viewed. This requires launching a separate Acrobat Reader application or a browser plug-in, which takes time. You can optimize a PDF to reduce its file size, but if the document is very long or makes heavy use of artwork, it can still be slow to open.

Then the user must repeat the search within the PDF. If the answer isn’t in that PDF, the whole process must be repeated. You can hope they find an answer quickly, or you risk an unhappy customer. And customer satisfaction affects future sales.

Then there is the static nature of PDFs. While considerable resources may have been required to develop the content, that content is not easily shared throughout the company. Worse, you lose significant valuable information regarding customer use and feedback. And that’s another hit to your sales and marketing groups.

So what’s the option? Use tools that optimize documentation for the web. XML is the document markup language of the internet, and XML documentation can be output as web pages as well as PDF. Documentation produced in XML has powerful capabilities, including:

  • Faster access. Technical information is presented in smaller “chunks,” produced and displayed using native internet and web-based tools. Not only is it more quickly accessed by the user, it’s also easily edited by authorized users.
  • Automated formatting. Most markup languages use separate stylesheets to format the output, such as CSS and XSL-FO. These define the corporate tech pubs style guide. Once these files are created, formatting is automatically applied across all XML content files. Content development moves much more quickly and freely.
  • Web analytics. It’s now possible to learn how your customers use the content you provide. This can be of huge value in identifying the strengths and weaknesses in the material. It also can improve the product.
  • Customer feed-back loops. Great products still require careful attention to customer satisfaction. It’s now possible to set up your documentation system so that customers can respond with their comments, questions, and suggestions. This enhances the customer experience, a critical factor in sales and marketing.
  • Version tracking. Because XML document files become part of your source code or content management system, it’s much easier to follow the evolution of the content. Between the first draft and the last, did something critical get lost or misrepresented? Simply search the earlier versions.
  • Shared content. At its most basic, every document contains copyright and contact information. If that information changes, each of your documents must be updated, and who has time to go back and update PDFs? With XML content, this information can be maintained in a single file: update that file, and you’ve updated all of your documents (see the sketch after this list).
  • Collaborative development. The internet is a powerful collaborative and social tool. Because formatting is left to the stylesheets, authorized users can participate directly in the documentation process using a simple text editor. Using version control, tech writers and editors can see what changed.
  • Access control. Access to content is more easily controlled. For example, user content can be more easily distinguished from System Administrator or Developer content. There is no longer a need to produce separate PDF documents.
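
As a minimal sketch of shared content (filenames and ids hypothetical), DITA handles this with a content reference, or conref:

  In shared.dita (a topic with id "shared"):
    <p id="copyright">Copyright © Example Corp. All rights reserved.</p>

  In any other document:
    <p conref="shared.dita#shared/copyright"/>

At build time, the toolkit replaces the referencing element with the shared one, so a single edit propagates everywhere.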

There are a number of options for implementing web-based documentation. Choosing an implementation depends on the nature of the content, the version-control or content-management system available for organizing and tracking the content, and the desired output. The more popular implementations are:

  • DITA. This is more amenable to specialization and the definition of an information and content architecture. It is better for large collections of interrelated topics.
  • DocBook. This was originally intended for managing single books or articles. It’s better for projects that focus on one large publication.
  • CMS/Wiki. This allows collaboration on multiple pieces of unstructured content.

Interested in learning more? See our blogs An Introduction to XML and DITA, Technical Documentation Moves Toward Live Product Content, and Editing Equations in Oxygen XML Editor for more information.

Jim is the owner of Phoenix Technical Publications. Phoenix Tech Pubs has provided complete technical writing and documentation services in Oakland and the San Francisco Bay Area for over 25 years.

Jim’s Alcatraz Swim, September 2011

This was my final event of the year, the Alcatraz Invitational Swim from Alcatraz to Aquatic Park. It’s about 1.25 miles if you swim straight. In past years I haven’t been so lucky, whether because of the currents, the fog, my crappy swim technique, or all of the above. This time conditions were ideal: beautiful day, minimal current, slightly improved swim technique. I was hoping to improve on my time of 1:01:57 from last year. I again had the company of my sister, Maria, and my niece, Nina, as well as quite a few friends from San Jose Swim & Racquet Club in my Willow Glen neighborhood. And this time I stopped to take plenty of photos, using a $20 disposable waterproof camera. Continue reading Jim’s Alcatraz Swim, September 2011

An Introduction to XML and DITA

XML (Extensible Markup Language)-based documentation is getting a lot of attention as a better way to develop and disseminate content than traditional technical writing methods. Bob Boiko, from the Society for Technical Communication, writes that XML-based development can “transform what you do from documentation to delivering information products that drive your organization forward.” (Intercom, April 2007) XML offers many potential benefits, not only for the traditional end user – the customer – but also for support personnel, marketing staff, engineers, and more. Continue reading An Introduction to XML and DITA

Ironman Canada, August 2011

This is the third of my four events this year. I was exhilarated after completing the Boston Marathon and the California Death Ride, and I felt comfortable that my training would get me through the event: my speed work was done while training for the Boston Marathon, and my endurance training was done while training for the Death Ride. Most important, I’d gotten through all my training without injury. But I must admit I was feeling the fatigue from training that had started before Christmas last year. When it came time to taper for this race, I had no problem taking it easy. I was glad to have the training behind me and was looking forward to the reward of racing.

 

Continue reading Ironman Canada, August 2011

Death Ride, July 2011

Jim signing the board at the top of Carson Pass.

So I completed the second of my four events for the year, the California “Death Ride,” so named because it covers 129 miles over five mountain passes, totaling 15,000 feet of climbing, all at high elevation! It is not a race: there were no timing mats to cross, no split times. In fact, there were numerous rest stations where riders took the time to get off their bikes to rest, eat, and chat with other riders. But with all those miles and all those climbs, finishing before the cut-off would be my challenge. Continue reading Death Ride, July 2011