Offline content generator/Architecture

General Overview

Diagram: Proposed deployment architecture at the Wikimedia Foundation.

As shown in the diagram, MediaWiki sits between the render servers and the public internet. The Collection extension is the portal to the backend and follows a 'render, render status, download document' workflow. If it determines that a document needs to be rendered, it can push the new job to any render server, which will in turn push it onto a queue in Redis for eventual pickup. Status updates are obtained by querying any render server, which retrieves the status from Redis.
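
The following is a minimal sketch of that workflow from the MediaWiki side, assuming a frontend reachable at http://render.local:8000 and a simple ?command= query API; the host, the collection_id parameter, and the status field names are illustrative assumptions, not the Collection extension's actual code.

  // Illustrative client flow: request a render, poll its status, then download.
  const FRONTEND = 'http://render.local:8000';

  async function call(command: string, collectionId: string): Promise<Response> {
    const url = `${FRONTEND}/?command=${command}&collection_id=${encodeURIComponent(collectionId)}`;
    return fetch(url);
  }

  async function renderAndFetch(collectionId: string): Promise<ArrayBuffer> {
    await call('render', collectionId);                           // enqueue the job
    for (;;) {
      const status = await (await call('render_status', collectionId)).json();
      if (status.state === 'finished') break;                     // hypothetical status field
      if (status.state === 'failed') throw new Error('render failed');
      await new Promise((r) => setTimeout(r, 2000));              // poll every two seconds
    }
    return (await call('download', collectionId)).arrayBuffer();  // stream the final document
  }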

Render servers have three main processes: a frontend, render clients, and a garbage collector. The frontend is an HTTP server and is the public interface of the server. Render clients do the actual work, opportunistically picking up jobs from Redis. Finally, the garbage collector picks up after failed jobs, marking their status as failed in Redis and cleaning up the local scratch space.

To do the actual work, a render client takes a job from the Redis FIFO queue. On successful completion it produces a bundle file and a final rendered document, both of which are stored temporarily in Swift (on the order of a couple of days). The bundle file is kept in case another render job arrives for the same content but a different output format.
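
The temporary storage step could look roughly like the sketch below, assuming Swift's standard object-expiry header (X-Delete-After) and hypothetical account, container, and token values; the two-day window is likewise illustrative.

  import { readFile } from 'node:fs/promises';

  // Upload a finished artifact to Swift with a short lifetime so it expires on its own.
  async function storeTemporarily(localPath: string, objectName: string): Promise<void> {
    const body = await readFile(localPath);
    const res = await fetch(`https://swift.local/v1/AUTH_ocg/ocg-output/${objectName}`, {
      method: 'PUT',
      headers: {
        'X-Auth-Token': process.env.SWIFT_TOKEN ?? '',   // placeholder auth token
        'X-Delete-After': String(2 * 24 * 3600),         // expire after roughly two days
      },
      body,
    });
    if (!res.ok) throw new Error(`Swift PUT failed: ${res.status}`);
  }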

At any time after the work has been completed, provided the job has not yet expired from Redis, the frontend can be instructed to stream the file through MediaWiki and down to a user. The file is served from MediaWiki with cache-control headers so that it can be stored for a longer term in Varnish.

Because every job has a unique hash (the collection ID) created from, among other things, the article revision IDs, cache invalidation happens automatically when a new request arrives for changed text content. However, changes to templates, images, or anything else that does not update a revision ID will not produce a new hash, so the document will not be re-rendered on request unless a manual purge is issued.
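
A sketch of how such a hash could be derived is shown below; the exact inputs (ordered revision IDs plus the requested writer and title) are an assumption of this example, not the extension's actual recipe.

  import { createHash } from 'node:crypto';

  // Derive a collection ID from the ordered revision IDs plus the requested output writer.
  // Any change to a revision ID yields a new ID; template or image changes do not.
  function collectionId(revisionIds: number[], writer: string, title: string): string {
    const material = JSON.stringify({ revisionIds, writer, title });
    return createHash('sha1').update(material).digest('hex');
  }

  // e.g. collectionId([123456, 789012], 'rdf2latex', 'My Book') -> 40-character hex string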

The Render Server (a.k.a. offline content generator)

The render server hosts a Node.JS process which forks itself several times to spawn its subcomponents. The initial process is kept as a coordinator and can restart the others on demand, or if any of them dies unexpectedly. It can be run standalone, logging to the console, or as a service with logs routed to syslog.
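
A minimal sketch of that coordinator pattern using Node's cluster module follows; the mix of worker roles and the respawn policy are placeholders rather than the service's real configuration.

  import cluster from 'node:cluster';

  // Coordinator: fork one frontend, one garbage collector, and two render clients,
  // then respawn any child that exits unexpectedly.
  const roleOf = new Map<number, string>();

  function spawn(role: string): void {
    const worker = cluster.fork({ OCG_ROLE: role });
    roleOf.set(worker.id, role);
  }

  if (cluster.isPrimary) {
    for (const role of ['frontend', 'gc', 'render', 'render']) spawn(role);

    cluster.on('exit', (worker, code, signal) => {
      const role = roleOf.get(worker.id) ?? 'render';
      console.warn(`${role} worker died (code=${code}, signal=${signal}); restarting`);
      roleOf.delete(worker.id);
      spawn(role);
    });
  } else {
    // A real child would dispatch on OCG_ROLE to run the frontend, a render
    // client, or the garbage collector; omitted here.
    console.log(`worker ${process.pid} starting as ${process.env.OCG_ROLE}`);
  }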

Render Frontend

The frontend is an HTTP server capable of accepting new jobs, reporting the status of pending and running jobs, and streaming final rendered content back to the requester; a minimal dispatch sketch follows the command list below.

API (command=?)

  • render Places a new job (and its metadata) into Redis.
  • download Streams a completed document back to MediaWiki; the response contains only the document itself. Headers such as cache-control must be added by MediaWiki. The response may be an HTTP 302 to a server that can access the document if the local server cannot.
  • render_status Queries the Redis server for the current status of the job.
  • zip_post HTTP POSTs the intermediate ZIP file to an external server. This is a legacy command supported by mwlib to push prefetched collections to external render services. Though our intermediate format has changed, we will support this because it could have future uses.
  • health_check Ensures that the server is still responsive over HTTP.
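
The dispatch on the command parameter could be sketched as follows; the handlers are stubs, and the real service's routing, parameter names, and responses may differ.

  import http from 'node:http';

  // Minimal routing sketch for the frontend's ?command= API.
  const server = http.createServer((req, res) => {
    const url = new URL(req.url ?? '/', 'http://localhost');
    switch (url.searchParams.get('command')) {
      case 'render':         // enqueue a new job in Redis (stubbed)
      case 'render_status':  // look up the job's status hash (stubbed)
      case 'download':       // stream, or 302 to, the finished document (stubbed)
      case 'zip_post':       // POST the intermediate bundle to an external service (stubbed)
        res.writeHead(501).end('not implemented in this sketch');
        break;
      case 'health_check':
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ ok: true }));
        break;
      default:
        res.writeHead(400).end('unknown command');
    }
  });

  server.listen(8000);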

Render Client

The render pipeline has three broad stages: getting the job from Redis, spidering the site to produce an intermediate file with all resources, and rendering the output. A worker-loop sketch follows the list below.

  • Takes a job from Redis when free
  • Spidering
    • Pulls each title from Parsoid
    • Processes all downloaded RDF for external resources such as images
  • Rendering
    • Processes the RDF as required for the output format
    • Runs pages through a compositor such as LaTeX or PhantomJS, producing intermediate pages
    • Performs final compositing of all parts (adding the title page, table of contents, and page numbers; merging intermediates; etc.)
    • Saves the final file to local or remote disk
    • Updates the Redis entry for the job while in progress and on completion
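
The worker loop could be sketched as below; the stage functions are stubbed placeholders, the queue key name and renderer name are hypothetical, and the status payloads are illustrative, while the ocg-collection-<CollectionID> key layout follows this document.

  import Redis from 'ioredis';

  // Render-client sketch: pop a job, spider, render, and keep the status hash updated.
  const redis = new Redis('redis://localhost:6379');
  const QUEUE = 'ocg-pending';                       // hypothetical pending-queue key

  async function setStatus(id: string, renderer: string, status: object): Promise<void> {
    await redis.hset(`ocg-collection-${id}`, renderer, JSON.stringify(status));
  }

  async function workOnce(): Promise<void> {
    const id = await redis.rpop(QUEUE);              // take the oldest pending job
    if (!id) return;                                 // nothing to do right now

    const renderer = 'rdf2latex';                    // illustrative renderer name
    try {
      await setStatus(id, renderer, { state: 'spidering', percent: 10 });
      const bundle = await spider(id);               // fetch content and resources
      await setStatus(id, renderer, { state: 'rendering', percent: 60 });
      const output = await render(bundle);           // composite the final document
      await setStatus(id, renderer, { state: 'finished', percent: 100, file: output });
    } catch (err) {
      await setStatus(id, renderer, { state: 'failed', error: String(err) });
    }
  }

  // Stand-ins for the real spidering and rendering stages.
  async function spider(id: string): Promise<string> {
    return `/tmp/${id}.bundle.zip`;
  }
  async function render(bundle: string): Promise<string> {
    return bundle.replace('.bundle.zip', '.pdf');
  }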

Garbage Collector

Every so often, the garbage collector will do the following (a sweep sketch follows the list):

  • Go through all keys in the Redis server and remove old jobs and files (older than 7 days?)
  • Also clean up intermediate results and output PDFs?
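
One possible shape for that sweep is sketched below; the updated timestamp field, the 15-minute period, and the seven-day cutoff are assumptions that mirror the open questions above rather than settled behaviour.

  import Redis from 'ioredis';

  // Garbage-collection sweep: walk the ocg-collection-* keys with SCAN and drop
  // any job whose newest status update is more than a week old.
  const redis = new Redis('redis://localhost:6379');
  const MAX_AGE_MS = 7 * 24 * 3600 * 1000;

  async function sweep(): Promise<void> {
    let cursor = '0';
    do {
      const [next, keys] = await redis.scan(cursor, 'MATCH', 'ocg-collection-*', 'COUNT', 100);
      cursor = next;
      for (const key of keys) {
        const fields = await redis.hgetall(key);
        const updates = Object.values(fields).map((v) => JSON.parse(v).updated ?? 0);
        if (Date.now() - Math.max(...updates) > MAX_AGE_MS) {
          await redis.del(key);   // also the point to unlink scratch files and output PDFs
        }
      }
    } while (cursor !== '0');
  }

  setInterval(sweep, 15 * 60 * 1000);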

Redis Server

There are three classes of objects stored in Redis: a FIFO list of pending jobs, job status objects, and collection metadata (metabook) objects. Jobs are inserted into Redis in such a way that no contention can happen: a WATCH is issued on the job status object before insertion. Once a job is in Redis, typically only the client responsible for it may modify the status object. The exceptions are garbage-collector cleanup after abnormal job termination and the injection of new render jobs that reuse the same metadata with a different renderer.
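
A sketch of that insertion pattern with the ioredis client is shown below; the key and field names follow this document, while the pending-queue key and the status payload are illustrative.

  import Redis from 'ioredis';

  // Contention-free job insertion: WATCH the status hash and only enqueue if no
  // other frontend has written it in the meantime (the MULTI aborts otherwise).
  const redis = new Redis('redis://localhost:6379');

  async function enqueueJob(collectionId: string, renderer: string, metabook: object): Promise<boolean> {
    const key = `ocg-collection-${collectionId}`;
    await redis.watch(key);

    if (await redis.hexists(key, renderer)) {   // this render is already owned
      await redis.unwatch();
      return false;
    }

    const result = await redis
      .multi()
      .hset(key, renderer, JSON.stringify({ state: 'pending' }))
      .hset(key, 'metabook', JSON.stringify(metabook))
      .lpush('ocg-pending', collectionId)
      .exec();

    return result !== null;                     // null means the watched key changed
  }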

Key Names

Primary Redis key names are of the form ocg-collection-<CollectionID>.

Pending Queue

The pending queue is a Redis list structure. Pending jobs, identified by their collection ID, are entered into the list via an LPUSH command and removed with RPOP commands; both operations are atomic.

Job Status Objects

Job statuses are represented by Redis hash objects. The primary key is the collection ID and the hash field is the renderer. The value is a JSON blob with an expected size of less than 1 KB; among other things it contains the current owner of the job, a textual/numeric percentage status of a running job, and the location on disk of all final products.

These status objects are kept with a timeout equal to or greater than the longest external document cache timeout. Redis only allows key expiry on the entire hash, so the longest-lived external document determines the expiry of the key.
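
A small sketch of writing one renderer's status entry and refreshing the whole hash's expiry follows; the 30-day timeout stands in for "the longest external document cache timeout" and is not a real configuration value.

  import Redis from 'ioredis';

  // Write a status blob into the job's hash and extend the expiry of the whole
  // hash, since Redis can only expire the key as a unit.
  const redis = new Redis('redis://localhost:6379');

  async function writeStatus(collectionId: string, renderer: string, status: object): Promise<void> {
    const key = `ocg-collection-${collectionId}`;
    await redis.hset(key, renderer, JSON.stringify(status));
    await redis.expire(key, 30 * 24 * 3600);
  }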

Collection Metadata (Metabook) Object

Metabook objects are kept in the same job status hash object, identified by the key metabook. This is a JSON blob of variable size, typically less than 10 KB. It is, however, manually deleted by the final render thread so that these potentially large objects are not kept around for long periods.

Notes on Production Operations

Load Balancing

As each render frontend may respond to any request, it is possible to use LVS to distribute load and react to downtime.

Caching

There are multiple levels of caching in this solution. The backend level, for which the WMF will be using Swift with object expiry, merely gets the document off the render server and into a shared space awaiting pickup by the user who requested it. Once the document is requested, the Collection extension / MediaWiki streams the object to the user with cache-control headers so that Varnish can cache the response for a longer time.

When a render job is requested, the frontend will issue HTTP HEAD requests to Varnish and then Swift to determine whether the object is already in cache before issuing a new render request.
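
Those probes could look roughly like this; both URLs are placeholders for wherever the finished documents are actually reachable.

  // Cache probe: ask Varnish first, then Swift, whether the document already exists.
  async function alreadyCached(objectName: string): Promise<boolean> {
    const candidates = [
      `https://varnish.local/ocg/${objectName}`,
      `https://swift.local/v1/AUTH_ocg/ocg-output/${objectName}`,
    ];
    for (const url of candidates) {
      const res = await fetch(url, { method: 'HEAD' });
      if (res.ok) return true;    // a 200 means a rendered copy is still available
    }
    return false;
  }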

Multiple Data Centers

No explicit internal support is built in at this time. However, if Redis were available across data centers, it would be possible to redirect users to the data center where their file is located, and it would probably be possible to run render jobs in the data center where they were originally requested if we had multiple job queues.

Redis Sharding

We do not expect to have enough data in cache to require sharding (initial estimates are about 10 GB in cache at any given time), and no support has yet been built in for this purpose. However, if it becomes necessary, it should be fairly easy to accomplish, though preferably we would delegate the requirement to a Redis proxy such as twemproxy.