OpenText Enterprise World 2019 day 2: technology keynote

We started day 2 of OpenText Enterprise World with a technology keynote by Muhi Majzoub, EVP of Engineering. He opened with a list of their major releases over the last year. He highlighted the upcoming shift to cloud-first containerized deployments of the next generation of their Release 16 that we heard about in Mark Barrenechea’s keynote yesterday, and described the new applications that they have created on the OT2 platform.

We heard about and saw a demo of their Core for Federated Compliance, which allows for federated records and retention management across CMS Core, Content Suite and Documentum repositories, with future potential to connect to other (including non-OpenText) repositories. I’m still pondering the question of when they might force customers to migrate off some of the older platforms, but in the meantime, the content compliance and disposition can be managed in a consolidated manner.

Next was a demo of Documentum D2 integrated with SAP — this already existed for their other content products, but was a direct request from customers — allowing content imported into D2 in support of transactions such as purchase orders to be viewed by an SAP user as related documents in a Smart View. They have a strong partnership with SAP, providing enterprise-scale content management as a service on the SAP cloud, integrated with SAP S/4HANA and other applications. They are providing content management as OT2-based microservices, allowing content to be integrated anywhere in the SAP product stack.

AppWorks also made an appearance: this is OpenText’s low-code application development platform that also includes their process management capabilities. They have new interfaces for developers and users, including better mobile applications. No demo, however; given that I missed my pre-conference briefing, I’ll have to wait until later today for that.

Majzoub walked through the updates of many of the other products in their portfolio: EnCase, customer experience management, AI, analytics, eDocs, Business Network and more. They have such a vast portfolio that there are probably few analysts or customers here that are interested in all of them, but there are many customers that use multiple OpenText products in concert.

He finished up with more on OT2, positioning it as a platform and repository of services for building applications in any of their product areas. These services can be consumed by any application development environment, whether their AppWorks low-code platform or more technical development tools such as Java. An interesting point from yesterday’s keynote challenges the idea of non-technical users as “citizen developers”: they see low-code as something that is used by [semi-]technical developers to build applications. The reality of low-code may finally be emerging.

They are featuring six new cloud-based applications built on OT2 that are available to customers now: Core for Capital Projects, Core for Supplier Exchange, Core Enhances Integration with CSP, Core Capture, Core for SAP SuccessFactors, and Core Experience Insights. We saw a demo that included the Capital Projects and Supplier Exchange applications, where information was shared and integrated between a project manager and a supplier providing documentation on proposed components. The Capital Projects application includes analytics dashboards to track progress on deliverables and issues.

Good start to the day, although I’m looking forward to more of a technical drill-down on AppWorks and OT2.

OpenText Enterprise World 2019 day 1 keynote

OpenText is holding their global Enterprise World back in Toronto for the third year in a row (meaning that they’ll probably move on to another city for next year — please not Vegas) and I’m here for a couple of days for briefings with the product teams and to sit in on some of the sessions.

I attended a session earlier on connecting content and process that was mostly market research presented by analysts John Mancini and Connie Moore — some interesting points from both of them — before going to the opening keynote with CEO/CTO Mark Barrenechea and a few guests including Sir Tim Berners-Lee.

Barrenechea started with some information about where OpenText is now, including their strong positions in analyst rankings for content services platforms (Content Services), supply chain commerce networks (Business Network) and digital process automation (AppWorks). He believes that we’re “beyond digital”, with a focus on information rather than automation. He announced cloud-first versions of their products coming in April 2020, although some products will also be available on premises. Their OT2 Cloud Platform will be sold on a service model; I’m not sure if it’s a full microservice implementation, but it sounds like it’s at least moving in that direction. They’ve also announced a new partnership with Google, with Google Cloud being their preferred platform for customers and the integration of Google services (such as machine learning) into OpenText EIM; this is on a similar scale to what we’ve seen between Alfresco and Amazon AWS.

The keynote finished with a talk by Sir Tim Berners-Lee, inventor of the World Wide Web, on how the web started, how it’s now used and abused, and what we all can do to make it better.

Cloud architecture panel at bpmNEXT 2019

As a twist on the usual bpmNEXT format, we heard from a panel of the demo participants: Michael Lim of IBM, Philippe Laumay of Bonitasoft and Phil Simpson of Red Hat. A few notes from the panel – no attribution of specific comments, but you can likely make some guesses – on what vendors are facing with cloud architectures.

  • Platform architecture needs to have cloud-level scalability through containerization
  • Cloud is pushing vendors from a monolithic BPMS platform to a microservice architecture for elasticity
  • A “boil the ocean” monolithic digital business platform doesn’t make sense; it’s better to provide easily-consumable services on a pay-per-use basis
  • Services are assembled into solutions but may be guided by a platform strategy to know what will work well together
  • A single-vendor platform requires pricing for only the components used
  • Monolithic platforms provide a common data model used by a single vendor’s tools for better application of machine learning and AI to the data
  • Low-code application development, solution accelerators or partner-created vertical solutions are required to sell the cloud platform
  • Cloud microservice architecture enables collaboration between vendors and customers in more of an open source model
  • Picking the right best-of-breed service for your use case can be a competitive differentiator
  • Systems integrators are going to shift to more of a consulting role to focus on best practices (including which service to pick for which application) rather than building solutions
  • Vendors can help to build relationships between partners with complementary skills to build solutions together
  • Cloud doesn’t necessarily mean that you don’t know where the data is (e.g., hybrid cloud), just that it is managed in a consistent fashion and transparent to the users
  • Capture (from physical documents/objects) is one area where physical location is particularly relevant since the physical documents will be stored somewhere for a period of time
  • BPMN isn’t necessarily used for end-to-end modeling of executable processes, since that implies orchestration at that level; at the high level, it is more commonly used to model milestones and business behaviors

In summary: cloud and microservices are good, but the single-vendor platform versus best-of-breed services is still up for debate.

Now, on to the demos.

Show me the money: Financials, sales and support at @OpenText Analyst Summit 2019

We started the second day of the OpenText Analyst Summit 2019 with their CFO, Madhu Ranganathan, talking about their growth via acquisitions as well as organic growth. She claimed that their history of acquisitions shows that M&A does work — a point with which some industry specialists may not agree, given the still overlapping collection of products in their portfolio — but there’s no doubt that they’re growing well based on their six-year financials, across a broad range of industries and geographies. She sees this as a position for continuing to scale to $1B in operating cash flow by June 2021, an ambitious but achievable target building on their existing 25-year run.

Ted Harrison, EVP of Worldwide Sales, was up next with an update on their customer base: 85 of the 100 largest companies in the world, 17 of the top 20 financial services companies, 20 of the top 20 life sciences companies, etc. He walked through the composition of the 1,600 sales professionals in their teams, from the account executives and sales reps to the solution consultants and other support roles. They also have an extensive partner channel bringing domain expertise and customer relationships. He highlighted a few customers in some of the key product areas — GM for digital identity management, Nestle for supply chain management, Malaysia Airports for AI and analytics, and British American Tobacco for SuccessFactors-OT2 integration — with a focus on customers that are using OpenText in ways that span a significant portion of their business operations.

James McGourlay, EVP of Customer Operations, covered how their global technical support and professional services organization has aligned with the customer journey from deployment to adoption to expansion of their OpenText products. With 1,400 professional services people, they have 3,000 engagements going on at any given time across 30 countries. As with most large vendors’ PS groups, they have a toolbox of solution accelerators, best practices, and expert resources to help with initial implementation and ongoing operations. This is also where they partner with systems integrators such as CGI, Accenture and Deloitte, and platform partners like Microsoft and Oracle. He addressed the work of their 1,500 technical support professionals across four major centers of excellence for round-the-clock support, co-located with engineering teams to provide a more direct link to technical solutions. They have a strong focus on customer satisfaction in PS and technical support because they realize that happy customers tend to buy more stuff; this is particularly important when you have a lot of different products to sell to those customers to expand your footprint within their organizations.

Good to hear more about the corporate and operations side than I normally cover, but looking forward to this afternoon’s deeper dives into product technology.

Product Innovation session at @OpenText Analyst Summit 2019

Muhi Majzoub, EVP of Engineering, continued the first day of the analyst summit with a deeper look at their technology progress in the past year as well as future direction. I only cover a fraction of OpenText products; even in the ECM and BPM space, they have a long history of acquisitions and it’s hard to keep on top of all of them.

Their Content Services platforms provide information integration into a variety of key business applications, including Salesforce and SAP; this allows users to work in those applications and see relevant content in that context without having to worry where or how it’s stored and secured. Majzoub covered a number of the new features of their content platforms (alas, there are still at least two content platforms, and let’s not even talk about process platforms) as well as user experience, digital asset management, AI-powered content analytics and eDiscovery. He talked about their solutions for LegalTech and digital forensics (not areas that I follow closely), then moved on to the much broader areas of AI, machine learning and analytics as they apply to capture, content and process, as well as their business network transactions.

He talked about AppWorks, which is their low-code development environment but also includes their BPM platform capabilities since they have a focus on process- and content-centric applications such as case management. They have a big push on vertical application development, both in terms of enabling it for their customers and also for building their own vertical offerings. Interestingly, they are also allowing for citizen development of micro-apps in their Core cloud content management platform that includes document workflows.

The product session was followed by a showcase and demos hosted by Stephen Ludlow, VP of Product Marketing. He emphasized that they are a platform company, but since line-of-business buyers want to buy solutions rather than platforms, they need to be able to demonstrate applications that bring together many of their capabilities. We had five quick demos:

  • AI-augmented capture using Captiva capture and Magellan AI/analytics: creating an insurance claim first notice of loss from an unstructured email, while gathering aggregate analytics for fraud detection and identifying vehicle accident hotspots.
  • Unsupervised machine learning for eDiscovery to identify concepts in large sets of documents in legal investigations, then using supervised learning/classification to further refine search results and prioritize review of specific documents.
  • Integrated dashboard and analytics for supply chain visibility and management, including integrating, harmonizing and cleansing data and transactions from multiple internal and external sources, and drilling down into details of failed transactions.
  • HR application integrating SAP SuccessFactors with content management to store and access documents that make up an employee HR file, including identifying missing documents and generating customized documents.
  • Dashboard for logging and handling non-conformance and corrective/preventative actions for Life Sciences manufacturing, including quality metrics and root cause analysis, and linking to reference documentation.

Good set of business use cases to finish off our first (half) day of the analyst summit.

Snowed in at the @OpenText Analyst Summit 2019

Mark Barrenechea, OpenText’s CEO and CTO, kicked off the analyst summit with his re:imagine keynote here in Boston amidst a snowy winter storm that ensures a captive audience. He gave some of the current OpenText stats (100M end users, over 120,000 customers, $2.8B in revenue last year) before expanding into a review of how the market has shifted over the past 10 years, fueled by changes in technology and infrastructure. What’s happened on the way to digital and AI is what he calls the zero theorem: zero trust (guard against security and privacy breaches), zero IT (bring your own device, work in the cloud), zero people (automate everything possible) and zero down time (everything always available).

Their theme for this year is to help their customers re:imagine work, re:imagine their workforce, and re:imagine automation and AI. This starts with OpenText’s intelligent information core (automation, AI, APIs and data management), then expands with both their EIM platforms and EIM applications. OpenText has a pretty varied product portfolio (to say the least) and is bringing many of these components together into a more cohesive integrated vision in both the content services and the business network spaces. More importantly, they are converging their many, many engines so that in the future, customers won’t have to decide which ECM or BPM engine to use, for example.

They are providing a layer of RESTful services on top of their intelligent information core services (ECM, BPM, Capture, Business Network, Analytics/AI, IoT), then allowing that layer to be consumed either by standard development tools in a technical IDE, or using the AppWorks low-code environment. The OT2 cloud architecture provides about 40 services for consumption in these development environments or by OpenText’s own vertical applications such as People Center.
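To give a sense of what consuming that kind of service layer looks like from a technical development environment, here is a minimal sketch of calling a REST content service; the tenant URL, endpoint path, parameters and token handling are placeholders of my own, not OpenText’s actual OT2 API.

```python
# Hypothetical sketch of consuming a REST content service from a technical IDE.
# Endpoint paths, parameters and the token flow are illustrative only.
import requests

BASE_URL = "https://example-tenant.example.com/api/v1"  # placeholder tenant URL
TOKEN = "..."  # assume an OAuth2 bearer token obtained out of band

def fetch_related_documents(business_object_id: str) -> list:
    """Retrieve content items linked to a business object (e.g., a purchase order)."""
    resp = requests.get(
        f"{BASE_URL}/documents",
        params={"relatedTo": business_object_id},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("items", [])

if __name__ == "__main__":
    for doc in fetch_related_documents("PO-12345"):
        print(doc.get("name"), doc.get("id"))
```

The same call could just as easily be wired into a low-code form or a vertical application, which is the point of exposing the core as services rather than as a monolithic product.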

Barrenechea finished up with a review of how OpenText is using OpenText to transform their own business, using AI to look at some of their financial and people management data and help guide them towards improvements. They’ll be investing $2B in R&D over the next five years to help them become even bigger in the $100B EIM market, both through the platform and increasingly through vertical applications.

Next up was Ted Harrison, EVP of Worldwide Sales, interviewing one of their customers: Gopal Padinjaruveetil, VP and Chief Information Security Officer at The Auto Club Group. AAA needs no introduction as a roadside assistance organization, but they also have insurance, banking, travel, car care and advocacy business areas, with coordinated member access to services across multiple channels. It’s this concept of the connected member that has driven their focus on digital identity for both people and devices, and how AI can help them to reduce risk and improve security by detecting abnormal patterns.

We’ll be digging into more of the details later today and tomorrow as the summit continues, so stay tuned.

Integrating your enterprise content with cloud business applications? I wrote a paper on that!

Just because there’s a land rush towards SaaS business applications like Salesforce, it doesn’t mean that your content and data are all going to be housed on that platform. In reality, you have a combination of cloud applications, cloud content that may apply across several applications, and on-premise content; users end up searching in multiple places for the information needed to complete a single transaction.

In this paper, sponsored by Intellective (who have a bridging product for enterprise content/data with SaaS business applications), I wrote about some of the architecture and design issues that you need to consider when you’re linking these systems together. Here’s the introduction:

Software-as-a-service (SaaS) solutions provide significant utility and value for standard business applications, including customer relationship management (CRM), enterprise resource planning (ERP), supply chain management (SCM), human resources (HR), accounting, insurance claims management, and email. These “systems of engagement” provide a modern and agile user experience that guides workers through actions and enables collaboration. However, they rarely replace the core “systems of record”, and don’t provide the range of content services required by most organizations.

This creates an issue when, for example, a customer service worker’s primary environment is Salesforce CRM, but for every Salesforce activity they may also need to access multiple systems of record to update customer files, view regulatory documentation or initiate line-of-business (LOB) processes not supported in Salesforce. The worker spends too much time looking for information, risks missing relevant content in their searches, and may forget to update the same information in multiple systems.

The solution is to integrate enterprise content from the systems of record – data, process and documents – directly with the primary user-facing system of engagement, such that the worker sees a single integrated view of everything required to complete the task at hand. The worker completes their work more efficiently and accurately because they’re not wasting time searching for information; data is automatically updated between systems, reducing data entry effort and errors.
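To make that pattern concrete (this is purely an illustrative sketch of my own, not an excerpt from the paper and not Intellective’s bridge product), here is roughly what a “single integrated view” aggregation looks like, with hypothetical repository endpoints and field names:

```python
# Illustrative sketch: aggregating one view of a customer for a worker whose
# primary environment is a system of engagement (e.g., Salesforce CRM).
# The endpoints and field names below are hypothetical placeholders.
import requests

ECM_URL = "https://ecm.example.com/api"        # hypothetical content system of record
CLAIMS_URL = "https://claims.example.com/api"  # hypothetical LOB system of record

def integrated_view(customer_id: str) -> dict:
    """Pull documents and open LOB cases for one customer so the worker sees them
    alongside the CRM record instead of searching each system separately."""
    docs = requests.get(f"{ECM_URL}/documents",
                        params={"customerId": customer_id}, timeout=30).json()
    cases = requests.get(f"{CLAIMS_URL}/cases",
                         params={"customerId": customer_id, "status": "open"},
                         timeout=30).json()
    return {"customerId": customer_id, "documents": docs, "openCases": cases}
```

Whether that bridging logic runs as middleware or as a component embedded in the CRM user interface, the point is the same: the worker never has to leave the system of engagement to find what the systems of record hold.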

Head on over to get the full paper (registration required).

AlfrescoDay 2018: digital business platform and a whole lot of AWS

I attended Alfresco’s analyst day and a customer day in New York in late March, and due to some travel and project work, I’m just finding time to publish my notes now. Usually I do that while I’m at the conference, but part of the first day was under NDA so I needed to think about how to combine the two days of information.

The typical Alfresco customer is still very content-centric, in spite of the robust Alfresco Process Services (formerly Activiti) offering that is part of their platform, with many of the key success stories presented at the conference based on content implementations and migrations from ECM competitors such as Documentum. In a way, this is reminiscent of the FileNet conferences of 20 years ago, when I was talking about process but almost all of the customers were only interested in content management. What moves this into a very modern discussion, however, is the focus on Alfresco’s cloud offerings, especially on Amazon AWS.

First, though, we had a fascinating keynote by Sangeet Paul Choudary — and received a copy of his book Platform Scale: How an emerging business model helps startups build large empires with minimum investment — on how business models are shifting to platforms, and how this is disrupting many traditional businesses. He explained how supply-side economies of scale, machine learning and network effects are allowing online platforms like Amazon to impact real-world industries such as logistics. Traditional businesses in telecom, financial services, healthcare and many other verticals are discovering that without a customer-centric platform approach rather than a product approach, they can’t compete with the newer entrants into the market that build platforms, gather customer data and make service-based partnerships through open innovation. Open business models are particularly important, as is striking the right balance between an open ecosystem and maintaining control over the platform through key control points. He finished up with a digital transformation roadmap: gaining efficiencies through digitization; then using data collected in the first stage while integrating flows across the enterprise to create one view of the ecosystem; and finally externalizing and harnessing value flows in the ecosystem. This last stage, externalization, is particularly critical, since opening the wrong control points can kill your business or stifle open growth.

This was a perfect lead-in to Chris Wiborg’s (Alfresco’s VP of product marketing) presentation on Alfresco’s partnership with Amazon and the tight integration of many AWS services into the Alfresco platform: leveraging Amazon’s open platform to build Alfresco’s platform. This partnership has given this conference in particular a strong focus on cloud content management, and we are hearing more about their digital business platform that is made up of content, process and governance services. Wiborg started off talking about the journey from (content) digitization to digital business (process and content) to digital transformation (radically improving performance or reach), and how it’s not that easy to do this, particularly with existing systems that favor on-premise monolithic approaches. A (micro-) service approach on cloud platforms changes the game, allowing you to build and modify faster, and deploy quickly on a secure elastic infrastructure. This is what Alfresco is now offering, through the combination of open source software, integration of AWS services to expand their portfolio of capabilities, and an automated DevOps lifecycle.

This brings a focus back to process, since their digital business platform is often sold process-first to enable cross-departmental flows. In many cases, process and content are managed by different groups within large companies, and digital transformation needs to cut across both islands of functionality and islands of technology.

They are promoting the idea that differentiation is built and not bought, with the pendulum swinging back from buy toward build for the portions of your IT that contribute to your competitive differentiation. In today’s world, for many businesses, that’s more than just customer-facing systems, but digs deep into operational systems as well. In businesses that have a large digital footprint, I agree with this, but have to caution that this mindset makes it much too easy to go down the rabbit hole of building bespoke systems — or having someone build them for you — for standard, non-differentiating operations such as payroll systems.

Alfresco has gone all-in with AWS. It’s not just a matter of shoving a monolithic code base into a Docker container and running it on EC2, which is how many vendors claim AWS support: Alfresco has a much more integrated microservices approach that provides the opportunity to use many different AWS services as part of an Alfresco implementation in the AWS Cloud. This allows you to build more innovative solutions faster, but also can greatly reduce your infrastructure costs by moving content repositories to the cloud. They have split out services such as Amazon S3 (and soon Glacier) for storage services, RDS/Aurora for database services, SNS for notification, security services, networking services, IoT via Alexa, Rekognition for AI, etc. Basically, a big part of their move to microservices (and extending capabilities) is by externalizing to take advantage of Amazon-offered services. They’re also not tied to their own content services in the cloud, but can provide direct connections to other cloud content services, including Box, SharePoint and Google Drive.
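As a rough illustration of that externalization pattern — this sketches the AWS services themselves, not Alfresco’s own connectors, and the bucket and file names are placeholders — storing a content binary in S3 and asking Rekognition to classify it looks something like this:

```python
# Sketch of the externalization pattern: content binaries in Amazon S3, with
# Amazon Rekognition providing image classification labels.
import boto3

BUCKET = "example-content-store"  # placeholder bucket name

def store_and_classify(local_path: str, key: str) -> list:
    """Upload an image to S3, then ask Rekognition for content labels."""
    s3 = boto3.client("s3")
    s3.upload_file(local_path, BUCKET, key)

    rekognition = boto3.client("rekognition")
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": BUCKET, "Name": key}},
        MaxLabels=10,
        MinConfidence=80,
    )
    return [label["Name"] for label in response["Labels"]]

if __name__ == "__main__":
    print(store_and_classify("claim-photo.jpg", "claims/claim-photo.jpg"))
```

The labels returned by a call like this are the kind of raw material behind the content recognition and classification use cases mentioned later in this post.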

We heard from Tarik Makota, an AWS solution architect from Amazon, about how Amazon doesn’t really talk about private versus public cloud for enterprise clients. They can provide the same level of security as any managed hosting company, including private connections between their data centers and your on-premise systems. Unlike other managed hosting companies, however, Amazon is really good at near-instantaneous elasticity — both expanding and contracting — and provides a host of other services within that environment that are directly consumed by Alfresco and your applications, such as Amazon Aurora on RDS, a variety of AI services, and serverless Step Functions. Alfresco Content Services and Process Services are both available as AWS QuickStarts, allowing for full production deployment in a highly-available, highly-redundant environment in the geographic region of your choice in about 45 minutes.
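For reference, a QuickStart is essentially a CloudFormation template, so a scripted deployment looks roughly like the sketch below; the template URL and parameter names are placeholders, and the real values come from the relevant QuickStart documentation.

```python
# Hedged sketch of launching a CloudFormation-based QuickStart with boto3.
# TemplateURL and Parameters are placeholders, not the actual Alfresco QuickStart values.
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

cloudformation.create_stack(
    StackName="alfresco-demo",
    TemplateURL="https://example-bucket.s3.amazonaws.com/quickstart.template",  # placeholder
    Parameters=[
        {"ParameterKey": "KeyPairName", "ParameterValue": "my-keypair"},  # hypothetical
    ],
    Capabilities=["CAPABILITY_IAM"],
)

# Block until the stack and its nested resources finish provisioning.
waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName="alfresco-demo")
print("Stack ready")
```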

Quite a bit of food for thought over the two days, including their insights into common use cases for Alfresco and AI in content recognition and classification, and some of their development best practices for ensuring reusability across process and content applications built on a flexible modern architecture. Although Alfresco’s view of process is still quite content-centric (naturally), I’m interested to see where they take the entire digital business platform in the future.

Also great to see a month later that Bernadette Nixon, who we met as the Chief Revenue Officer at the event, has moved up to the CEO position. Congrats!

bpmNEXT 2018: Last session with a Red Hat demo, Serco presentation and DMN TCK review

We’re on the final session of bpmNEXT 2018 — it’s been an amazing three days with great demos and wonderful conversations.

Exploiting Cloud Infrastructure for Efficient Business Process Execution, Red Hat

Kris Verlaenen, project lead for jBPM as part of Red Hat, presented on cloud BPM infrastructure, specifically for execution and monitoring. Cloud makes BPM lightweight, scalable, embeddable and able to take advantage of the larger cloud app ecosystem. They are introducing some new cloud infrastructure, including a controller for managing server deployments, a smart router for delegating and aggregating requests from applications to servers, and monitoring that aggregates process statistics across servers and containers. The demo showed using Red Hat’s OpenShift container application platform (actually Minishift running on his laptop) to create a new environment and deploy an IT hardware ordering BPM application. He walked through using the application to create a new order and see the milestone-based monitoring of the order, then the hardware provider’s view of their steps in the process to provide information and advance the process to the next stage. The process engine and monitoring engine can be deployed in different containers on different hardware, in any combination of cloud providers and on-premise infrastructure. Applications and servers can be bundled into a single immutable image for easy provisioning — more of a microservices style — or can be deployed independently. Multiple versions of the same application can be deployed, allowing current instances to play out in the original version while new instances use the most recent version, or other strategies that would allow new instances of any version to be created, while monitoring can aggregate instance data from all versions in all containers.
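For a sense of how such a deployed process server is consumed, here is a minimal sketch that assumes the standard jBPM KIE Server REST endpoint for starting a process instance; the host, container ID, process ID and credentials are placeholders for whatever your OpenShift route exposes.

```python
# Minimal sketch of starting a process instance on a jBPM KIE Server.
# All identifiers and credentials below are placeholders.
import requests

KIE_SERVER = "http://kie-server.example.com/services/rest/server"  # placeholder route
CONTAINER_ID = "it-orders_1.0.0"        # hypothetical deployment unit
PROCESS_ID = "itorders.order-hardware"  # hypothetical process definition ID

def start_order(requestor: str, item: str) -> int:
    """Start a new process instance and return its instance ID."""
    resp = requests.post(
        f"{KIE_SERVER}/containers/{CONTAINER_ID}/processes/{PROCESS_ID}/instances",
        json={"requestor": requestor, "item": item},  # process variables
        auth=("admin-user", "admin-password"),        # placeholder credentials
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # the server returns the new process instance ID

if __name__ == "__main__":
    print(start_order("jane", "laptop"))
```

Because the application only talks to a REST endpoint, it doesn’t care whether that endpoint is one container, a versioned pair of containers, or a smart router in front of several servers.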

Kris is also live-blogging the conference, check out his posts. He has gone back and included the video of each presentation when they are released (something that I didn’t do for page load performance reasons) as well as providing his commentary on each presentation.

Dynamic Work Assignment, Serco

Lloyd Dugan of Serco had the unenviable position of being the last presenter of the conference, although he gave a presentation on a dynamic work assignment implementation rather than an actual demo (with a quick view of the simple process model in the Trisotech animator near the end, plus an animation of the work assignment in action). His company is a call center business process outsourcer, where knowledge workers use a case management application implemented in BPMN, driven by events such as inbound calls and documents, as well as timers. Real-time work prioritization and assignment is necessary because of SLAs around inbound calls, and the task management model is moving from work being selected (and potentially cherry-picked) by workers, to push assignments. Tasks are scored and assigned using decision models that include task type and SLAs, and worker eligibility based on each individual’s skills and training. Although work assignment products exist, this one is specifically for the complex rules around the US Affordable Care Act administration, which requires a combination of decision tables, database table-driven rules, and lower-level coding to provide the right combination of flexibility and performance.
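The decision models themselves weren’t shown, but the general technique of scoring tasks against SLA urgency and pushing them to eligible workers can be sketched in a few lines; this is my own simplified illustration, not Serco’s actual rules.

```python
# Simplified sketch of push-based work assignment: score tasks by type and SLA
# urgency, then assign each to an eligible (skill-matched) worker.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Task:
    task_id: str
    task_type: str
    sla_deadline: datetime
    required_skills: set = field(default_factory=set)

@dataclass
class Worker:
    worker_id: str
    skills: set = field(default_factory=set)

def score(task: Task, now: datetime) -> float:
    """Higher score = more urgent; inbound calls outrank documents."""
    minutes_left = (task.sla_deadline - now).total_seconds() / 60
    type_weight = {"inbound_call": 100, "document": 10}.get(task.task_type, 1)
    return type_weight + max(0, 60 - minutes_left)  # urgency grows as the SLA nears

def assign(tasks: list, workers: list, now: datetime) -> dict:
    """Push the highest-scored tasks to workers whose skills make them eligible."""
    assignments = {}
    for task in sorted(tasks, key=lambda t: score(t, now), reverse=True):
        for worker in workers:
            if worker.worker_id not in assignments and task.required_skills <= worker.skills:
                assignments[worker.worker_id] = task.task_id
                break
    return assignments

now = datetime(2018, 4, 20, 12, 0)
tasks = [Task("T1", "inbound_call", now + timedelta(minutes=5), {"aca_enrollment"}),
         Task("T2", "document", now + timedelta(hours=4), {"document_review"})]
workers = [Worker("W1", {"aca_enrollment", "document_review"}),
           Worker("W2", {"document_review"})]
print(assign(tasks, workers, now))  # {'W1': 'T1', 'W2': 'T2'}
```

In practice the scoring and eligibility logic sits in decision tables and database-driven rules rather than code, which is exactly the flexibility/performance trade-off Lloyd described.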

DMN TCK (Technical Compatibility Kit) Working Group

Keith Swenson of Fujitsu (but presenting here in his role with the DMN standards effort) started on the idea of a set of standardized DMN technical compatibility tests based on conversations at bpmNEXT in 2016, and he presented today on where they’re at with the TCK. Basically, the TCK provides a way for DMN vendors to demonstrate their compliance with the standard by providing a set of DMN models, input data, and expected results, testing decision tables, boxed expressions and FEEL. Vendors who can demonstrate that they pass all of the TCK tests are listed on a GitHub site along with information about individual test results, providing a way for DMN customers to assess the compliance level of vendors. Keith wrote an update on this last September that provides a good summary up to that point, and in today’s presentation he walked through some of the additional things that they’ve done, including identifying sections of the DMN specification that require clarifications or additions due to ambiguity that can lead to different implementations. DMN 1.2 is coming out this year, which will require a new set of tests specifically for that version while maintaining the previous version tests; they are also trying to improve testing of error cases and introduce more real-world decision models. If you create and use DMN models, or make a DMN-compliant decision management product, or you’re otherwise interested in the DMN TCK, you can find out here how to get involved in the working group.
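Conceptually, the TCK is just a harness that feeds each model its input data and compares the engine’s output with the expected results. Here is a rough sketch of that idea (not the actual TCK code), where evaluate_decision stands in for whatever API a given DMN engine exposes, and the sample test case is hypothetical.

```python
# Rough sketch of a TCK-style harness: each test pairs a DMN model and inputs
# with expected results; a vendor-supplied evaluator produces the actual results.
from typing import Callable

def run_tck_tests(tests: list, evaluate_decision: Callable[[str, dict], dict]) -> list:
    """Return (test_name, passed) for each test case."""
    results = []
    for test in tests:
        actual = evaluate_decision(test["model"], test["inputs"])
        results.append((test["name"], actual == test["expected"]))
    return results

# Hypothetical test case: a decision table that sets a discount by customer type.
tests = [
    {"name": "0001-discount", "model": "discount.dmn",
     "inputs": {"customerType": "gold"}, "expected": {"discount": 0.15}},
]

def fake_engine(model: str, inputs: dict) -> dict:
    """Stand-in for a real DMN engine, for demonstration only."""
    return {"discount": 0.15 if inputs.get("customerType") == "gold" else 0.0}

print(run_tck_tests(tests, fake_engine))  # [('0001-discount', True)]
```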

That’s it for bpmNEXT 2018. There will be voting for the best in show and some wrapup after lunch, but we’re pretty much done for this year. Another amazing year that makes me proud to be a part of this community.

Anarchy in Edmonton: no, it’s not hockey, it’s Google Drive

I’m in a breakout session at the AIIM 2018 conference, and Kristan Cook and Gina Smith-Guidi are talking about their work at the City of Edmonton in transitioning from network drives to Google Drive for their unstructured corporate information. Corporate Records and Information Management (CRIM) is part of the Office of the City Clerk, and is run a bit independently of IT and in a semi-decentralized manner. They transitioned from Microsoft Office to Google Suite in 2013, and wanted to apply records management to what they were doing; at that time, there was nothing commercially available, so they hired a Google Apps developer to build it for them. They needed the usual records management requirements: lifecycle management, disposition and legal hold reporting, and tools to help users to file in the correct location; on top of that, it had to be easy to use and relatively inexpensive. They also managed to reconcile over 2,000 retention schedules into one master classification and retention schedule, something that elicited gasps from the audience here.

What they offer to the City departments is called COE Drive, which uses a functional classification at the top level — it just appears as a folder in Google Drive — with the “big bucket” method below that, where documents are filed within a subfolder that represents the retention classification. When you click New in Google Drive, there’s a custom popup that asks for the primary classification and secondary classification/record series, and a subfolder within the secondary classification. This works for both uploaded files and newly-created Google Docs/Sheets files. Because these are implemented as folders in Google Drive, access permissions are applied so that users only see the classifications that apply to them when creating new documents. There’s also a simple customized view that can be rolled out to most users who only need to see certain classifications when browsing for documents. Users don’t need to know about retention schedules or records management, and can just work the way that they’ve been working with Google Drive for five years, with a bit of a helper app for filing documents. They’re also integrating Google File Stream (the sync capability) for files that people work on locally on their desktops, to ensure that they are both backed up and stored as proper records if required.
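The City’s helper was built as a custom Google Apps integration; as a rough sketch of the same filing pattern, here is what classification-driven filing looks like with the Drive v3 Python client, where the classification names and the folder IDs standing in for retention classifications are hypothetical.

```python
# Sketch of classification-driven filing: upload a document into the Drive folder
# that represents its retention classification. Folder IDs are placeholders.
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

# Hypothetical mapping of classification codes to folder IDs in COE Drive.
CLASSIFICATION_FOLDERS = {
    "FIN-05 Accounts Payable": "1AbCdEfGhIjKlMnOpQrStUv",  # placeholder folder ID
}

def file_document(credentials, local_path: str, name: str, classification: str) -> str:
    """Upload a document into the folder for its retention classification."""
    drive = build("drive", "v3", credentials=credentials)
    metadata = {"name": name, "parents": [CLASSIFICATION_FOLDERS[classification]]}
    media = MediaFileUpload(local_path)
    created = drive.files().create(body=metadata, media_body=media, fields="id").execute()
    return created["id"]
```

In the City’s implementation the equivalent logic sits behind the custom New-document popup, so end users never deal with classification folders or retention schedules directly.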

The COE Drive is a single-account drive, I assume so that documents added to it have their ownership set to that account and are not subject to individual user changes. There’s not much metadata stored except for the date, business area and retention classification; in my experience with Google Drive, the search capabilities mean that you need much less explicit metadata.

It sounds as if most of the original work was done by a single developer, and now they have new functionality created by one student developer; on top of that, since it’s cloud-based, there’s no infrastructure cost for servers or software licences, just subscription costs for Google Apps. They keep development in-house both to reduce costs and to speed deployment. Compare that with the cost and time for your usual content and records management project — there are no zeros missing: the original development cost was less than $50k (Canadian). That streamlined technology path has also inspired them to streamline their records management policies: changes to the retention schedule that used to require a year and five signatures can now be signed off by the City Clerk alone.

Lots of great discussion with the audience: public sector organizations are very interested in any solution where you can do robust content and records management using low-cost cloud-based tools, but many private sector companies are seeing the benefits as well. There was a question about whether they share their code: they don’t currently do that, but don’t have a philosophical problem with doing so — watch for their GitHub repo to pop up soon!