The Utility of Style Guides

By Craig Dodman

What is a Style Guide?

Style guides are widely used reference documents that contain rules and recommendations for writing content. A style guide will typically feature specifications for

  • the formatting of titles, lists, and tables,
  • the language or version of English to be used,
  • the units of measurement to be used,
  • legal requirements,
  • the phraseology for any notices of warning, caution, or danger,
  • the templates for different types of information blocks, and
  • the application of fonts, highlights, bold, and italics.

Style guides encompass a wide range of other topics, but some things should be omitted. Jean Hollis Weber’s article on TechWhirl, “Developing a Departmental Style Guide,” is extensively detailed and gives a robust list of items that should (and should not) be covered in a style guide.

Style guides are used effectively by many companies; the Microsoft and Apple style guides, for example, are widely known for their standards and publishing aesthetic. This article details the benefits of using a style guide, some of the routes a documentation department can pursue to incorporate style guides into its practice, the nuances of developing effective style guides, and some final thoughts on why style guides are invaluable to technical writers in the context of Content 4.0.

Benefits of Using Style Guides

Style guides serve an important role in a documentation department. The ability to reference a centralized document allows technical writers to streamline their workflow and rely less on individual judgment calls. Without standards, each document has a higher likelihood of containing inconsistencies, errors, or divergences in tone, which can make texts harder to read and navigate.

Adopting a style guide allows a documentation department to

  • ensure all experiences with the text are congruent,
  • create a body of documentation that has a consensus on terminology,
  • condition readers to expect certain formatting patterns,
  • reduce individual decision making for writers, and
  • greatly reduce disputes between writers and editors.

Incorporating style guides in the workplace requires some time and labour; however, the value of that labour pays dividends over time.

Developing Style Guides

One of the best aspects of style guides is that they are infinitely adaptable. Regardless of the demands of your specific field, a style guide can be tailored to unify the voices of everyone writing for the company. A documentation department can do the following:

  • use an existing style guide,
  • adapt an existing style guide to meet their needs, or
  • create a new style guide.

Creating a style guide can be a laborious process, but it can pay off in better quality management and lowered production time. Controlling the use of specific terminology can be incredibly important for some businesses. The Microsoft Manual of Style is a great example of a guide that places great emphasis on the usage of terms. When writers use terms consistently under the guidance of a style guide, they increase the reader’s ability to navigate, comprehend, and utilize the information in the document.

The repercussions or the intended user responses determine the severity of the guide’s recommendations, and the writing style should reflect that severity. A guide that articulates best practices for customer service does not carry the same consequence as a safety protocol whose neglect could result in bodily harm. For example, the Precision Content Standards Guide gives different recommendations for writing a note of best practice and for writing a warning:

Best practice: To satisfy a client over the long term, be consistent and communicate regularly.

Warning: Do not smoke within 10 meters of the door.

(Precision Content Standards Guide, 22)

The imperative sentence structure for the warning makes the required action unambiguous and immediately recognizable. When style guides are used reliably throughout a document department, a user who becomes familiar with the conventions will naturally be able to scan for particular forms of information.

Making Style Guides Effective

Developing effective style guides can challenge any documentation department. Between navigating multiple writing philosophies and creating a unique voice for your brand, spearheading this kind of project can be rather difficult. Here are some useful suggestions for developing style guides for your writing department.

Style Guides are Not Absolute

Rules may need to be broken in unusual circumstances. The guide should be considered the default for any writing in the department; however, exceptions should be made with reasonable justification. Codifying when exceptions may be made will further improve the style guide, although anticipating these exceptions can be difficult or sometimes impossible.

Overly Controlling Style Guides are Less Usable

A style guide should formulate the important terms and subjects to reduce ambiguity and misunderstanding. Overly developed style guides reach a point of diminishing returns: an overabundance of rules makes the guide less usable for the writer and offers fewer noticeable benefits to the document. A style guide should not encroach on the writer’s ability to craft a document suited to the needs of the user and the topic.

Focus on Style

Avoid creating rules for non-style-related things, such as the use of graphical elements, choice of software, and procedures related to reviewing, publishing, or archiving the document. The style guide should be focused solely on the written style of the document. Let those who are responsible for graphic or publishing decisions make those decisions.

Style Guides and the Future of Technical Writing

Style guides are valuable to technical writers now and will be even more so in the future. The concept of Content 4.0 offers insight into what makes contemporary technical writing different from technical writing in the past. Writers and theorists have described the content industry as having gone through cycles of development in relation to other fields such as industry, the web, and information.

An insightful article on Content 4.0 can be found on Joe Gollner’s blog, The Content Philosopher. The article details what differentiates Content 4.0 from past iterations and covers the development of several other industries to give reference to the development of Content 4.0.

The following features can distinguish Content 4.0 from past modes of production:

  • content delivery is automated,
  • content is being broken down into smaller molecules, and
  • content is ideally created for reuse.

Since Content 4.0 requires the text to be reusable, the text itself must be flexible to be used in any visual representation. Style guides can aim to standardize the language itself so that it can work with any visual presentation or page layout. This makes style guides a great companion to writers creating within Content 4.0’s framework.

Style guides are an effective companion to Content 4.0 because they can be used to

  • regulate the written language without affecting visual output,
  • simplify complex language,
  • produce content readable and translatable for machines, and
  • guarantee a singular voice across a large number of content molecules.

Style guides standardize the language that the writer is using, while not impacting the visual elements or publishing output. Rather than relying on past examples or the writer’s judgment, the style guide codifies these decisions in text. Style guides can be taken from established sources such as the Chicago Manual of Style or can be radically customized to fit the niche of your particular field. Style guides serve a vital role to optimize production within documentation departments and will continue to do so throughout the era of Content 4.0.

About the author

Craig Dodman is a technical writer and app developer. He is an active content contributor and editor for the STC Toronto Chapter blog. You can connect with Craig on his LinkedIn page: https://www.linkedin.com/in/craig-dodman-techcomm/.

Review of “Automated Transcription” by Bokhove and Downey

By A. V. Howland

Introduction

As a freelance transcriptionist, there’s nothing that delights me more than when a job comes in and it’s already half-complete. It’s easy to tell when a previous worker has been using voice-to-text software to do their job for them. The malapropisms and lack of division between speakers are a dead giveaway, but I honestly can’t blame them for using voice-to-text software. The job of a transcriptionist often pays a low salary despite the tedious, demanding nature of the work and the skills required to do it well. As a transcriptionist, automation can help lighten my workload significantly.

Voice-to-text AI can also be very useful in academic research and educational contexts, saving time and money for researchers, and giving hard-of-hearing/deaf students, ESL learners, and visual learners a needed leg up in their lessons.

Christian Bokhove and Christopher Downey advocate for the use of automated captioning services in their article “Automated generation of ‘good enough’ transcripts as the first step to transcription of audio-recorded data”, published in the May-August 2018 issue of Methodological Innovations. Through the authors’ experiments and their investigation into academic literature about transcription, they conclude that automated captioning services are well worth using, especially as a rudimentary tool to jumpstart the transcription process, with editors cleaning up the rough draft after its creation.


Article summary

The state of transcription

Bokhove and Downey begin by laying out the current state of transcription in academia and describing automated captioning services.

Automated captioning services (ACS) are common technology but have had little integration into academic research. ACS could be extremely useful for transcribing a rough draft of audio recordings. Since there are many transcription styles and most transcriptions need several passes of editing and standardizing to meet certain project-specific guidelines, creating a draft using ACS would save time and money for researchers (1-2).

Some academics doubt the usefulness of transcribed interviews, citing their debatable value and the time-consuming nature of the work. ACS can reduce the time and labour of creating a transcript manually. Bokhove and Downey also posit that ACS lacks the human bias of individual transcribers’ interpretations of audio (2-3). (I would like to point out that bias can exist in technologies because they were created and designed by humans, who have bias.) Transcription for research is always a trade-off between quality and resources, but using ACS could help bridge that gap by providing free “good enough” first drafts of transcripts that editors can then adjust (2-3).

Methods and accuracy of captions

Currently, researchers can obtain captions from professional captioning companies, by transcribing their own captions as subtitles to a video, or through automated captioning services. Auto-captioning services are the least accurate, as their technology is still developing and improving. Also, the accuracy required for a transcript depends on the transcript’s purpose (3-4). Accuracy varies widely among different types of automated captioning services and voice-to-text programs, ranging from incomprehensible to quite readable (4). Any automated transcript will require editing, but the authors believe that the trade-off between time saved and accuracy is worthwhile. One way to improve accuracy is for the researcher to read the text aloud into voice recognition software that has been trained on their voice, as is done in live TV captioning and court reporting. Still, while voice recognition software takes less time than human transcription, it has trouble transcribing multiple speakers’ voices and is much more time-consuming than letting an ACS program run (4).

Academic writing about transcription

There is not much academic writing about using automated captioning services or voice recognition software to assist transcription. Most writing that exists pertains to the use of transcription in educational contexts and has shown that captions of any kind make information accessible to deaf/hard-of-hearing students, increase understanding for special-needs and ESL students, and help with student note-taking (4).

The direction Bokhove and Downey take their experiment has previously remained unexplored, likely because easily accessible automated transcription tools are a relatively new technology; the authors’ research breaks new ground.

This small pool of academic research points to a gap that needs to be filled by more research and inquiry into automated transcription services and their potential uses. More academic research is needed to improve content accessibility, especially because academic research gives legitimacy to the topic of transcription and the solution of automated transcription tools.

Authors’ experiment and methodology

The authors wanted to create a proof of concept for their idea of using ACS to create a “rough draft” transcript. They used three different sources of audio—single-person interviews, a group meeting, and a classroom lecture—to test how ACS handles different scenarios and qualities of audio, including variables like background noise and multiple speakers. Each audio source had an accurate, manually produced transcript against which the authors compared the transcript created by the ACS (5-6). Bokhove and Downey uploaded each audio source to YouTube and used YouTube’s automatic captions feature to get their transcripts (6). They then downloaded each caption text file, removed the timestamps, and compared each YouTube transcript to its professional manual transcript using Turnitin (7-8).

Findings

Through their experiment, Bokhove and Downey found that free online automated captioning tools produce reasonable first-draft transcripts for use in academic research. Since manual transcription takes 4-5 hours for every hour of audio, using ACS would free up that time for other value-adding tasks, like editing, formatting, and proofreading. Depending on the audio quality, around 66-90% of the audio was accurately transcribed (10). The authors discovered that most automated caption errors were relatively small and easy to fix (8). Areas that ACS struggled with include jargon, numbers, slang, and certain word sounds (9). Overall, researchers could save a noteworthy amount of time by starting with an automated transcription instead of doing everything manually.

Author recommendations

Bokhove and Downey recommend that researchers consider using an automated transcription process for projects that require long transcripts (10). They say that any of the available free transcription software can be a viable option to produce first draft transcripts, noting that those drafts should be looked over by human editors to correct any transcription mistakes made by the ACS (11).

My opinion

While I cannot speak to the more technical aspects of the article, such as different software, I find the authors’ recommendation to be a useful one. I can speak to how long transcription can take, and correcting a rough draft is much easier than transcribing every word. Using a program to roughly transcribe an interview or recording saves time and makes editing and formatting the transcript easier.

The authors investigate using ACS for transcription in the context of academic research, but the potential uses go far beyond that situation. Transcription has a multitude of uses especially in the field of technical communication. For example, transcription is crucial for creating accessible audio content or for accommodating different learning styles in education. It would be a good idea to look further into how ACS-based transcription could be relevant to your work.

Conclusion   

The authors’ recommendation is a valuable one—researchers have so much to gain from integrating ACS into their research practices. Many automated transcription programs are free, simple to use, and save large amounts of time.

To summarize, transcript drafts created by automated captioning tools and edited by humans can be used for research (10). Though the drafts have a wide range of accuracy, they can save time and money for researchers (4, 2), and thus the authors recommend that researchers at least consider implementing this process before embarking on large research projects. If researchers started using ACS to create first drafts, then, as a transcriptionist, I could redirect my efforts to editing and fine-tuning the draft for quality rather than the time-consuming process of manual transcription.

Beyond automated transcription’s uses for academic research, it plays a much larger role in technical communication as a whole. Transcription and captioning make information accessible to everyone. This matters not only for disabled people’s access to information, but also for content accessibility as affected by platforms, devices, languages, and environments. Since information is increasingly presented in video, audio, and other non-text formats, captions are necessary to make that information accessible. Moreover, online content is constantly growing in the digital age: e-learning, online conferences and meetings, TV streaming services, and internet videos all need transcription for captions. In this pandemic, with online information and media becoming so prevalent, transcription and captioning are more vital than ever for information accessibility.

Bokhove and Downey’s investigation into automated transcription for research purposes opens up a new area of exploration not only for academia but also in the technical communication field.

Consider the technical communication field. How accessible is the content you create? How could transcription change the content you produce, and how will transcription affect it in the future? Start adding closed captions to videos and transcripts to audio posts. Get ahead of the transcription curve and make your content accessible easily with ACS.

About the author

A. V. Howland (they/them/their) uses their writing skills to help all kinds of projects reach their full potential. Their studies at York University have combined with practical work experiences as a library page and IT assistant to sharpen their versatile writing skills and eye for detail. Alongside an editing team, they edited The Game by Joel Lavigne (published April 2021) in their Publishing Practicum class, demonstrating their collaborative editorial skills while meeting close deadlines. A. V. is working on their Bachelor of Arts in English Professional Writing and looks forward to graduating to fully join the professional world. You can contact them on LinkedIn or find more of their work at their website, koi-caper-trcz.squarespace.com.

DITA Information Types

By Tommy Nicolls

DITA (Darwin Information Typing Architecture) is a versatile and widely used standard for content creation that is usually implemented using an XML editor. It is a structured authoring standard, or framework, that separates content and format. One of the major characteristics of DITA is that the smallest unit of content is a “topic”. A DITA topic is freestanding, reusable content that is focused on one specific subject. Standard DITA topics are categorized into three information types: concept, task, and reference. The process of dividing topics into categories based on their content is referred to as “information typing”.

Concept, Task, and Reference Topics

Each topic has one specific goal based on its information type:

  • Concept topics help the user understand an idea or the purpose of an instruction. They often provide the user with background information they will need before they begin a task. An example of a concept topic is an article explaining a new type of computer program and its features.
  • Task topics help the user do something, typically with step-by-step instructions presented as numbered lists. An example of a task topic is a set of instructions on how to build a new chair.
  • Reference topics give the user descriptions of something without explanation. A reference differs from a concept in that it does not require the reader to fully understand something; it is more concerned with providing facts, often presented as a table. An example is the nutritional information for a beverage.

Each information type has a standardized structure to serve its purpose. Because each topic has its own clear and distinct goal and structure, it is easy for a user to find the exact information they need. This is what makes content accessible and usable.
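As a sketch of what this standardized structure looks like in practice, here is a minimal, hypothetical DITA task topic for the chair example above. The element names (task, taskbody, steps, cmd, and so on) are standard DITA; the content is invented for illustration:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE task PUBLIC "-//OASIS//DTD DITA Task//EN" "task.dtd">
    <task id="assemble_chair">
      <title>Assembling the chair</title>
      <taskbody>
        <!-- What the user needs before starting -->
        <prereq>Unpack all parts and confirm none are missing.</prereq>
        <steps>
          <step><cmd>Attach the legs to the seat base.</cmd></step>
          <step><cmd>Fasten the backrest with the supplied bolts.</cmd></step>
          <step><cmd>Tighten all bolts with the hex key.</cmd></step>
        </steps>
        <!-- What the user should see when done -->
        <result>The chair is ready to use.</result>
      </taskbody>
    </task>

Because the structure is enforced by the information type, a user always knows where to look for prerequisites, steps, and results.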

Benefits 

Using DITA topics has many benefits. Some of the most valuable are reusability, scalability, and consistency. DITA structured authoring creates content that can be understood as standalone topics or within a larger context. One DITA topic can be reused for multiple publishing outputs, so there are no similar-but-different topics that could propagate discrepancies or inaccuracies through future edits. Content reuse, also known as single-sourcing, enables a change in one topic to be reflected in all the publishing outputs. DITA’s predefined structure allows the author to focus on the content itself while still maintaining a consistent structure.
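To make the single-sourcing idea concrete, here is a hypothetical DITA map that assembles one topic of each information type into a deliverable. The file names are invented for illustration; the point is that each topicref points at a freestanding topic file, so editing a topic once updates every output built from any map that references it:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
    <map>
      <title>Chair User Guide</title>
      <!-- Each topicref pulls in a standalone, reusable topic file -->
      <topicref href="chair_overview.dita" type="concept"/>
      <topicref href="chair_assembly.dita" type="task"/>
      <topicref href="chair_specifications.dita" type="reference"/>
    </map>

A second map, say for a quick-start card, could reference chair_assembly.dita as well, and both outputs would stay in sync automatically.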

Resource for Learning More

Concept, Task, and Reference are the most widely recognized DITA information types, but there are also other types of DITA topics with different standards and goals. You can learn more about the other kinds of DITA topics, as well as how to use them, by signing up for a free course at Learning DITA. These courses will give you a better idea of how DITA topics present content in real-world settings.

STC Toronto to Launch the Revamped Job Bank Site

By STC Toronto Council

STC Toronto is thrilled to announce the upcoming launch of the newest iteration of our Job Bank! We’ve been working on it for several months and are now making final preparations for the launch. We’ve redesigned it from the ground up and built it on WordPress to take advantage of all the functionality and security this platform provides.

Here’s a peek at what you can do with the new Job Bank:

Job Searching 
  • View all listings 
    • View all current job listings on a single page.
    • Filter jobs by Contract, Permanent, and other gig types.
  • Search for what’s important to you.
    • Search by keyword, location, and more.
    • See job locations on a map.
  • Bookmark jobs.
    • Save jobs you’re interested in and look at them later.
  • Apply for jobs.
    • Now you can apply for the job directly from the job listing!
Post Your Resume
  • Create your own online resume.
  • Employers can search by keyword, skill type, location, and more.
  • Apply to jobs directly from the job listing by sending along your online resume.

Best of all, use of the new Job Bank is included with STC Toronto membership! Members don’t have to pay anything to use it as long as they maintain their STC Toronto Chapter membership when they renew their STC membership. 

How do I get access to the new Job Bank?

We will send STC Toronto members an invitation by email very shortly. When you receive this email, be sure to click the link to set up your account and choose your password. Then watch for the confirmation email, and you’re good to go!  

If you’re not a member of STC Toronto, you can buy an access pass for the new Job Bank. Prices will be available on the new Job Bank when it is launched.

We think you’ll enjoy all the new functionality available in our new Job Bank.

Happy hunting! 

Meet Kay Kazmi — the New Vice President of STC Toronto Chapter

Kay is an accomplished technical communicator with over 12 years’ experience documenting hundreds of technical processes, training others to duplicate results, and helping beginners find their bearings in the world of technical communication. Kay fell in love with technical communication because it combines her twin passions: the English language and technology.

Currently, Kay is a Content Specialist at Precision Content Authoring Solutions Inc., a full-service solution provider to medium- and large-scale organizations around the globe seeking help to better understand and solve their content challenges. Before joining Precision Content, Kay led successful technical writing and training deliveries in several startups and multinational companies, across industries including software, finance, telecom, and healthcare. She also spent nearly a decade with IBM as a senior information developer, helping IBM’s clients solve their documentation and training challenges.

Kay’s personal style is to lead by example, always doing her best and encouraging others to do the same. She uses her positive attitude and tireless energy to encourage others to work hard and succeed. She strongly believes that the best way to ensure the growth of the field of technical communication is for the community to help each other learn and grow. You can accomplish such knowledge sharing only if you really connect with people and communicate your understanding to them.

While she enjoys a good Netflix binge, Kay also loves to unwind with a warm cup of coffee and a good book. On some days, she can be found enjoying a quiet stroll with her husband in the parks and promenades of her neighborhood.

Kay earned her MA in English Literature from Christian College, Lucknow University.

Understanding Microcontent and Its Effects on Technical Writing

By Craig Dodman

The term microcontent originates with usability expert Jakob Nielsen, who coined it in a 1998 article (Nielsen & Loranger, Microcontent: How to Write Headlines, Page Titles, and Subject Lines). The term has been adopted by many fields, including technical writing, marketing, and UX/UI. The following article explores the topic, its relevance to technical writing, and its prospective benefits to users and writers alike.

Defining microcontent

Briefly defined, microcontent is “text, image, or video content that can be consumed in 10-30 seconds” (Lorrie McConnell, Microcontent and What It Means for Communication and Technical Writing). While the amount of time it takes to consume the media is important, it is not the only defining characteristic.

In addition to this, I would refer to a definition supplied by Rob Hanna (Supporting information-enabled enterprises: Reengineering for better flow with microcontent). He notes that microcontent is content that is

  • about one primary idea, fact, or concept,
  • easily scannable,
  • labelled for clear identification and meaning, and
  • appropriately written and formatted for use anywhere and any time it is needed.

Microcontent as a methodology of creating content has grown in popularity recently. It is a topic that is often associated with marketing and DITA authoring; however, the aforementioned definitions formulate microcontent as an approach that is widely applicable to different types of communications.

Why is microcontent becoming popular?

As all industries change because of technological advancement and cultural shifts, content is likewise changing. Content is being produced, maintained, disseminated, and consumed differently than it was twenty years ago. Users want to find and use information as fast as possible and are often not willing to navigate poorly designed documents. Technical writers need to adjust their content, their writing, and their technology to keep up with the new demands. Developments in chatbots, machine translation, and single-sourced publishing demand new content formats that microcontent can provide.

Three benefits of microcontent

There are three characteristics of microcontent that benefit both the content creators and their users.

Focus

Microcontent requires that each block of content be focused solely on one idea and fulfill one purpose. Unrelated content is either cut or moved into separate content blocks. The user spends less time navigating and reading, and more time applying the information.

Enhanced searchability

In the current digital landscape, content’s searchability is a determining factor in its usability. If information is not easily searchable, it will not serve users even if it is otherwise usable. Since the microcontent approach rigorously enforces consistency of terminology and content labelling, precise search results can be delivered to users. Users do not need to scan through irrelevant information to find what they need. This also helps technical writers develop and maintain content throughout development cycles.

Improved navigation

Users employ a range of reading strategies when engaging with technical documentation. Most reading strategies are not linear, as is detailed in Tom Johnson’s article on I’d Rather Be Writing (Johnson, How to design documentation for non-linear reading behavior). Users often search for a point of information and then branch to related queries. Microcontent’s molecular nature allows readers to easily find needed information and navigate to other related content. Each topic can contain links to related information and can be referenced in each file. Analyzing and predicting the user’s needs and search patterns is essential to creating a functional network of content.

Applying microcontent methodology

Creating well designed microcontent requires that information be

  • properly chunked
  • not reliant on external information
  • not reliant on circumstances

Consider this example: an electronics company produces a line of keyboards, and each higher-end model retains the features of the model beneath it. The documentation department creates the feature descriptions as microcontent to be reused in each model’s documentation. This is a good example of microcontent because the content

  • focuses on one feature
  • assumes no context
  • is accessible, usable, and reusable

Well designed microcontent makes for well designed technical communications because it allows the users to reference information to a specific end. When an entire document is chunked properly and uses this format, it makes the document as a whole easier to scan and utilize.
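As a sketch of how one of those feature descriptions might be authored for reuse, here is a hypothetical DITA concept topic for a single keyboard feature. It focuses on one idea, assumes no surrounding context, and can be pulled unchanged into any model’s documentation (the topic id, title, and content are invented for illustration):

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
    <concept id="feature_backlighting">
      <title>Adjustable key backlighting</title>
      <conbody>
        <!-- One idea, no references to a specific model or context -->
        <p>The keyboard offers three levels of key backlighting.
        Backlighting keeps the key legends readable in low light.</p>
      </conbody>
    </concept>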

Challenges to creating microcontent

There are difficulties in developing microcontent. Difficulties can arise from

  • chunking information properly
  • making content that is not dependent on context
  • anticipating changes

These challenges can result in problems for writers as well as users. Chunking information can be difficult. Some topics are more contingent on supporting texts, circumstances, or settings, and writing them as isolated microcontent is no easy task. It can, however, be accomplished by strictly adhering to the topic format, wherein each block of content must serve one purpose, and then using links for references and related topics. 

When technical writers reuse content that was not designed for reuse (content that depends on circumstance or context), users may find the information incomplete, inconsistent, or confusing. Furthermore, designing reusable content requires control over the project, as well as continued adaptability of the documents to changes in products or services, which can demand hours of labour to make corrections and redraft content blocks. Anticipating and managing changes can make it challenging to maintain the integrity of reusable content.

Conclusions

Microcontent is a nuanced methodology for technical writers to produce content that is optimized for contemporary users and technologies. It creates a focused network of searchable writing for users and allows writers to create and maintain documents using the technological advantages of Content 4.0. 

Technical writers already use many of the strategies employed in microcontent; however, they may not always apply them with rigor. In adopting microcontent methodology, technical writers must actively consider how information is chunked, grouped, and linked together. As a result, both users and technical writers will benefit.

References

Hanna, Rob. “Supporting Information-Enabled Enterprises: Reengineering for Better Flow with Microcontent.” Precision Content, 2019, http://www.precisioncontent.com/wp-content/uploads/2019/11/RHANNA-Chatbots-CSA2019.pdf.

Johnson, Tom. “How to Design Documentation for Non-Linear Reading Behavior.” I’d Rather Be Writing, Tom Johnson, 15 May 2015, idratherbewriting.com/2015/05/15/writing-for-users-who-read-non-sequentially/.

McConnell, Lorrie. “Microcontent and What It Means for Communication and Technical Writing”. Best Practices in Strategic Communication, 18 Apr. 2019, blogs.chatham.edu/bestpracticesinstrategiccommunication/2019/04/18/microcontent-and-what-it-means-for-communication-and-technical-writing/.

Nielsen, Jakob, and Hoa Loranger. “Microcontent: How to Write Headlines, Page Titles, and Subject Lines.” Nielsen Norman Group, 29 Jan. 2017, http://www.nngroup.com/articles/microcontent-how-to-write-headlines-page-titles-and-subject-lines/. Accessed 14 Sept. 2020.

About the author

Craig Dodman is a technical writer and app developer.

LinkedIn: https://www.linkedin.com/in/craig-dodman-techcomm/

Season of Docs 2019: My Foray into Technical Writing

By Audrey Tavares

Explaining Season of Docs to family and friends was more difficult than it should have been.

Them: So… you’re working for Google?
Me: No, I’m working for an organization named Oppia.
Them: What’s Google got to do with it?
Me: They organized the whole program.
Them: Who’s paying you?
Me: Google.
Them: So… you’re working for Google?

‘Tis the Season

With the inaugural launch of Season of Docs in 2019, Google yet again established its passion for open source (Summer of Code being another of its programs). The program brings technical writers and open source organizations together so that both mutually benefit — writers gain open source experience under the guidance of mentors, and organizations benefit from improved documentation. Projects ranged from beginner’s guides and tutorials to API and reference documentation.

Season of Docs is aimed at early-career technical writers, and I was fortunate it came my way just as I graduated from university looking for a career change. With some teaching experience in my arsenal, I was chuffed to learn that I was going to be working with Oppia — a learning platform focused on providing engaging content to students. I worked with Oppia on a standard-length project (3 months), but longer-running projects (6 months) were also available — it all depends on the organization’s needs.

Fun fact: Oppia is a Finnish word meaning ‘to learn’.

Meet the mentors

The period between my acceptance into the program and the official start of the project was known as the ‘Community Bonding’ phase. This is how Google describes it:

Technical writers get to know mentors, get up to speed with the open source organization, and refine their projects in collaboration with mentors.

Sounds chill right? This is how my first meeting with Sean and Sandeep (my mentors at Oppia) went (and I’m obviously paraphrasing): 

Them: Hey, so awesome to meet you, congrats, this is gonna be great!

Me: OMG I’m looking forward to this!

Them: So we’re actually revamping the entire software and the proposal you wrote is kinda obsolete, so do you want to learn all the amazing new features in the next two weeks and rewrite the proposal? And feel free to suggest a hosting platform to us, and also can you do some hallway usability testing to get an idea of how users would like to access the docs?

Me: …. 

I personally called this the ‘Be cool and panic later’ phase. Of course, Sean and Sandeep were constantly available to assuage my fears and answer any questions, so I never felt left in the lurch.

Researching a hosting platform was actually fun. A few popped up in my online searches, but most required heavy use of the command line, which freaked me out enough that I ran away screaming like a banshee. I finally decided to go with Read the Docs as it is the largest open source documentation hosting site in the world — so I figured it was worth checking out.

Read the Docs builds and hosts documentation generated with Sphinx. Thus far, my only association with that word was a certain statue in Egypt.


I’ve since learned that Sphinx is a documentation generator used by the Python community. Writers use a lightweight markup language called reStructuredText (RST) or Markdown (or both!) to write the documentation. Of course, I knew nothing about all of this at the time — and with that lack of knowledge, the official start of the project began.

Stack Overflow to the rescue

Writing was painfully slow in the beginning as everything was unfamiliar — the language, the command line, the Sphinx… Having weekly milestones helped as I could narrow down my focus to the task at hand and not be overwhelmed with how much I didn’t know. Stack Overflow was a godsend and I realized there were other open source newbies who had asked the same questions I did. I think I did more Googling than writing that first month. Seriously, what did we do before the Internet?

As the weeks went by, I fell into a rhythm as I got more acquainted with the world of GitHub and submitted pull requests with increasing confidence. I started to talk funny: ‘Hey Sean and Sandeep, I amended the commit on that pull request, can you PTAL?’ (Please take a look, duh.) I practically threw myself a celebratory party the first time I used git commands without referring to my notes.

What makes Oppia unique is that the platform lets you create explorations (lessons) that replicate a one-on-one interactive tutoring scenario. Most of my work involved playing around with the new dashboards and features, creating video tutorials and writing up the user guide. Every week or so, I would submit my completed work as a pull request on GitHub. Weekly meetings with Sean and Sandeep also helped immensely as the frequent communication made me feel very supported. 

Building up Oppia’s documentation during Season of Docs

The second half of the project flew by, and before I knew it, the guys at Google popped up again to sort out project finalization. Oppia’s software was still undergoing development, and consequently there was more writing and video-making to be done — so we all agreed that this relationship was worth extending. Plus, I was pretty excited about Oppia’s plans for a wide international reach, and I knew I wanted to stick around. So I barely noticed when Google announced the end of Season of Docs.

(Not) The end

Season of Docs is officially over, but my relationship with Oppia continues. I’ve since been introduced to other team members and I’m looking forward to contributing however I can. This to me is the major highlight of Season of Docs — the experience doesn’t have to end when the program does.

If you’re thinking of applying to Season of Docs in 2020***, go for it — and hopefully you will have an easier time explaining it to your folks.
Connect with me on www.techwritingmatters.com or on LinkedIn.

***Editor’s note: Applications for Google Season of Docs 2020 are closed. Please visit the official website for more information.

About the author

Audrey Tavares

Audrey graduated from York University in 2019 with a certificate in Technical and Professional Communication. She is currently working as a technical writer at TAO Solutions Inc.

Why XML?

By Sowmya Sannaiah

Technical Communication professionals have been talking about authoring in XML for a very long time. XML, a cross-platform markup language, was initially designed to meet the challenges of large-scale e-publishing. Were the challenges met? Did XML succeed in exchanging a wide variety of data on the web? Let’s discuss.

So, what is XML?

Extensible Markup Language (XML) is a cross-platform, software- and hardware-independent markup language derived from Standard Generalized Markup Language (SGML). It is a purely text-based technology with a self-descriptive language. Data in XML is structured using meaningful tags that specify a given set of information. A simple XML message, for example, might contain sender data, receiver data, a heading, and a message body. You can add tags at any time to extend the content of the document, which is what makes XML extensible.
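A minimal sketch of such a message, modeled on the well-known W3C note example (the tag names are chosen by the author, as XML permits):

    <?xml version="1.0" encoding="UTF-8"?>
    <note>
      <to>Reader</to>       <!-- receiver data -->
      <from>Writer</from>   <!-- sender data -->
      <heading>Reminder</heading>
      <body>Submit the draft by Friday.</body>
    </note>

Each tag describes its data rather than its presentation, which is what makes the document self-descriptive.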

An XML document also allows data storage in a format that can be interpreted by any computer system and hence it is used to transfer structured data between heterogeneous systems. It plays a very significant role in the movement of a wide variety of data on the Web.

XML is an international document standard created by the World Wide Web Consortium (W3C), an organization that is responsible for maintaining web standards.

Defining document content

While documenting in XML, you need to define the elements and structure that can appear in an XML document. This can be done using a DTD and/or an XML Schema:

  • Document Type Definition (DTD): A DTD describes the order in which the data should appear, how the data can be nested, and other basic details of XML document structure. DTDs are part of the XML specification and work similarly to SGML DTDs.
  • XML Schema: A schema can define all the document structures that a DTD can, plus data types and other advanced rules that a DTD cannot express.

An XML document with a DTD or an XML Schema is designed to be self-descriptive.
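For illustration, here is a sketch of a DTD for the hypothetical note message shown earlier, written as an internal DTD subset. It fixes the order and nesting of the elements but, unlike a schema, says nothing about their data types:

    <!DOCTYPE note [
      <!-- A note must contain exactly these four elements, in this order -->
      <!ELEMENT note    (to, from, heading, body)>
      <!ELEMENT to      (#PCDATA)>
      <!ELEMENT from    (#PCDATA)>
      <!ELEMENT heading (#PCDATA)>
      <!ELEMENT body    (#PCDATA)>
    ]>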

XML and HTML

We should note that XML is not a replacement for Hypertext Markup Language (HTML). XML and HTML were designed to achieve different goals.

XML was originally designed to describe, store, and transfer data, not to display it, and its tags are not predefined but created by the author of the XML document. HTML, by contrast, was designed to display data in web browsers, and its tags are predefined. When HTML is used to display data, the data is embedded in HTML formatting.

XML editor

XML files can be created and edited using a simple text editor like Notepad, but professional XML editors help you write error-free XML documents, validate the XML against a DTD or schema, and ensure that you adhere to a valid XML structure.

An XML editor should be able to:

  • Automatically add closing tags to the opening tags
  • Validate XML code
  • Verify XML against DTD and Schema
  • Color code the XML syntax to increase readability

Advantages of XML

Some of the advantages offered by XML:

  • Human readable content: The tags, elements, and attributes in XML files are not only computer readable but also can easily be interpreted by humans. This is the greatest advantage for writers who have limited knowledge of programming languages.
  • Domain-specific vocabulary: As XML does not have any predefined tags, it allows the user to create tags based on the requirement of an application. In other words, XML allows domain-specific vocabulary per the need of the application without any restriction on the number of tags that can be defined.
  • Ease of data interchange between computer systems: XML provides the structure for storing data in text format. It is used as a popular standard format for data interchange. Thus, differences in how systems exchange data become insignificant. It produces files that are unambiguous, easy to generate, and easy to read.
  • Better search engine performance: XML file creators can inform a search engine that the search needs to be performed within certain tags. This allows a focused search. Therefore, using XML ensures the precision of the search results that match the search query.
  • Separation of content and format: XML allows the user to implement conditional formatting for an XML document. A separate style sheet is maintained to format the XML document. XML uses two types of style sheets, Cascading Style Sheets (CSS) and the Extensible Stylesheet Language (XSL), for formatting data (a sketch follows this list). Because of this separation, it is easy to update and maintain the format of the document whenever required. It is also easy to maintain a consistent style sheet for all documents.
  • Granular updates: When data in an XML document needs to be updated, the entire page need not be reloaded from the server. Only the changed content is downloaded, making updates faster.
  • Flexibility: Writing an XML document is easy compared to other markup languages. There are no predefined rules to follow; users can create their own tags and rules to serve their needs. So in terms of developing a document, XML is very flexible.
  • Multiple data types: XML documents can contain many data types, including multimedia such as images, sound, and video. These multimedia data are embedded directly in an XML document as encoded text.
  • Ease of translation and publishing: When content is stored in XML tags, the cost of translation can be reduced by automation. It is much easier to translate an XML file because it separates content from format and follows a rigorous standard with well-defined syntax. Publishing the document in several languages can be done with a single click, because formatting can be applied automatically while publishing the source XML files.
  • Forward and backward compatibility: Forward or backward compatibility of XML files is relatively easy to implement: DTD and Schema allow tags to be defined as optional. As long as the newly added tags are descendants of the optional tag, the old and new versions are mutually compatible.
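As a sketch of the content/format separation mentioned in the list above (and staying within XML itself, since XSL style sheets are written in XML), here is a hypothetical XSLT style sheet that renders the earlier note document as HTML. Swapping in a different style sheet changes the presentation; the note itself never changes:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Match the root note element and render its parts as HTML -->
      <xsl:template match="/note">
        <html>
          <body>
            <h1><xsl:value-of select="heading"/></h1>
            <p><xsl:value-of select="body"/></p>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>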

Why should we use XML in technical documentation?

A finished document can be assessed along two dimensions: effectiveness and efficiency. Effectiveness is whether the content clearly explains the product and the procedure to the reader. Efficiency is how quickly and economically the document was created.

We know that XML tags the elements of a document; for example, you use tags for a heading, a paragraph, and an item in a numbered list. Because these tags help the rendering engine format content appropriately for a wide range of output media, the burden of formatting on content creators is reduced tremendously. Since the flexible tagging scheme helps define the content and improve readability, many software engineers have been increasingly using XML to document computer programs. We technical communicators adopt XML for the same reasons.
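As a hypothetical sketch (the tag names here are invented for illustration rather than taken from any fixed standard), tagging a heading, a paragraph, and a numbered list might look like this:

    <section>
      <title>Installing the software</title>
      <p>Complete the following steps to install the software.</p>
      <steps>
        <!-- Each step is an item in a numbered list -->
        <step>Download the installer.</step>
        <step>Run the installer and follow the prompts.</step>
      </steps>
    </section>

A rendering engine can then decide whether the title becomes an HTML heading, a PDF chapter title, or something else entirely.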

When is XML suitable?

XML is especially suitable for documents with complex structures. The Darwin Information Typing Architecture (DITA) is a niche XML standard that is widely adopted in technical communications. DITA standards are written for different industries and different document types. Although DITA allows you to customize the tags to suit your required style, customization is time consuming and costly. For structured documents that need more flexibility, plain XML is a good alternative.

Wide adoption of XML

As per W3C, XML is one of the world’s most widely-used formats for representing and exchanging information.

XML helps to represent, process, and exchange information with robustness and efficiency. Hence XML is heavily used as a format for document storage and processing, both online and offline.

Today, XML works not only with documents but also with JSON, linked data, large databases (both SQL/relational and NoSQL), and the Internet of Things (IoT), and it appears in music players and in the automobile and aircraft industries. It is found almost everywhere.

Future prospects of XML in documentation

XML is a simple and very flexible markup language that will be a great foundation for many standards yet to come. XML also provides a common language that different computer systems can use to exchange data with one another. Even as each industry group comes up with new standards for what it wants to communicate, computers can still exchange data with minimal barriers.

About the author

Sowmya Sannaiah

Sowmya is a Technical Communicator with experience in IT documentation.

LinkedIn: https://www.linkedin.com/in/sowmya-sannaiah/

Blog: https://sowmyasannaiah.wordpress.com/


STC Toronto Members Win the 2020 Distinguished Chapter Service Award

Congratulations to our very own Vanitha Krishnamurthy and Mona Albano for winning this year’s STC Distinguished Chapter Service Award!

The STC Community Affairs Committee gives out this award every year to honor the dedication and hard work of STC’s community leaders. It is the highest recognition that an STC member can receive.

Mona Albano is currently serving as the Community Director of the STC Toronto Chapter.

Vanitha Krishnamurthy is currently serving as the Treasurer of the STC Toronto Chapter.

STC Toronto AGM 2020

STC Toronto chapter’s Annual General Meeting (AGM) was held successfully via Zoom on June 28th, 2020 amidst the COVID-19 pandemic.

The president and vice president of the STC Toronto Chapter, Joyce Lam and Shonna Eden, co-chaired the meeting.

Looking back on 2019-2020

At the beginning of the meeting, Shonna presented the 2019 meeting minutes and looked back on events and activities held last year. The treasurer, Vanitha Krishnamurthy, then reviewed the chapter’s finances. Joyce followed with a retrospective and outlook for 2019-2020. The council’s work focused on staying relevant and providing value for members by boosting the new Job Bank and by increasing marketing efforts and event attendance through a strengthened online presence. Fostering the next generation of leadership for the chapter was also a top priority. Joyce also touched upon the challenges and opportunities presented by COVID-19.

Electing community council and executive council

After the retrospective, attending members elected the new community council and executive council.

The 2020-2021 community council

Communication Manager: Natalya Lohnes
Communications Coordinator: Kahkashan Kazmi
Membership Manager: Tika Thapa
Program Manager: Phoebe Yu
Education Manager: Tania Samsonova
Publicity Manager: Vacant
Webmaster: Egnis Hoxha
Blog Manager: Peihong Zhu
Content Writer: Craig Dodman
Site Manager: Manali Pandit
Job Bank Manager: Anant Seshardri
Job Bank Coordinators: Kahkashan Kazmi, Michael Fowler

The 2020-2021 executive council

President: Joyce Lam
Vice President: Shonna Eden
Treasurer: Vanitha Krishnamurthy
Community Director: Mona Albano

Presenting 2020-2021 action plan

Following the election, Shonna presented the action plan for 2020-2021, which includes:

  • Blend social and networking events with workshops and talks
  • Increase web presence of events
  • Complete the Job Bank renovation
  • Build partnership with recruitment agencies and colleges
  • Hold job fairs and career development workshops

Awards

After the formal agenda concluded, Joyce announced the various chapter awards given out at the end of 2019. Joyce also congratulated Community Director Mona Albano and Treasurer Vanitha Krishnamurthy on receiving this year’s Distinguished Chapter Service Awards from the STC head office.

Keynote speech

For the grand finale of this year’s AGM, the renowned Canadian English expert and former Editor-in-Chief of the Canadian Oxford Dictionary, Katherine Barber, gave the keynote speech on the history of the English language, titled Why Is the English Language So Weird? Filled with stories and fun facts, the talk garnered rave reviews from the attendees in the form of chat messages.

If you want to know more about Katherine Barber’s work, please visit her blog: https://katherinebarber.blogspot.com/.

Press Enter to Continue

Book Review by Jane Aronovitch

Press Enter to Continue is the clever title of a book written by Joan Francuz, a friend, colleague, former STCer, and now author in her own right. After all, what do we do in our profession but write books, tomes, pages, testaments, and scripts of various sorts for our clients, just not, in most cases, for the general public?

When Joan autographed my copy of the book, she added “…and Ctrl S to save” after the title. That’s quintessential Joan Francuz and gives a taste of her witty style. While the book is historical—comprehensively so, but distilled to perfection—it is also chatty and full of personal treasures and stories. In fact, the book reads like Joan is talking to you, which makes it even more engaging. All of this makes her history of writing through the ages both personal and universal, as Joan discovers and exemplifies the “character trait—some call it a flaw—that compels people throughout history to sit down and write everything they know.”

And so it is that we learn of Joan’s love of gardening, her family, various jobs, travels and homes. But the meat of the book is the wealth of information on how scribes came to be and how they fashioned and used the tools of their trade through the ages.

With Joan’s deft touch and skill it all comes to life—from the Sumerians, to the Greeks, Romans, so-called Barbarians, and Renaissance men (I’m sure they had women too!); from symbols to characters to alphabets (first uppercase only, then lowercase) and the introduction of numbers and publishing; from all of these to the effects of religion, commerce, patents, railroads and more, including telegraphy and photography and how they influenced the dissemination of information. And all the while Joan relates to the material anecdotally, personally or with modern day comparisons. This is no dry history text!

A section entitled “Then people became data” introduces the digital age and brings us to “the time called now.” The book concludes with the thought that in every age people had a need to document the world around them. Press Enter to Continue carries on this tradition in exemplary fashion. It is a well researched piece of work documenting the history of our profession, with a bit of humour and personality thrown in for good measure. It is well worth the read!