
Talk:Redis


Config Redis MW 1.41.2 / Jobrunner Fatal error

Keepersdungeon (talkcontribs)

Hello, I was trying to follow the steps to use Redis for caching, but I'm getting a fatal error when I run redisJobChronService. I added the following to LocalSettings.php:

$wgMainCacheType = 'redis';
$wgObjectCaches['redis'] = [
        'class'   => 'RedisBagOStuff',
        'servers' => [ '...redis.sock' ]
];
$wgSessionCacheType = 'redis';

$wgJobTypeConf['default'] = [
        'class'       => 'JobQueueRedis',
        'redisServer' => '...redis.sock',
        'redisConfig' => [],
        'daemonized'  => true
];

Then I cloned mediawiki/services/jobrunner into the wiki root, as suggested, and followed the steps, creating config.json with:

{
        "groups": {
                "basic": {
                        "runners": 0
                }
        },
        "limits": {
        },
        "redis": {
                "aggregators": [
                        "...redis.sock"
                ],
                "queues": [
                        "...redis.sock"
                ]
        },
        "dispatcher": "nothing"
}

but when I run "php redisJobChronService --config-file=config.json" I'm getting:

Fatal error: Uncaught error: Class "wikimedia\IPUtils" not found

Any help would be appreciated. Thanks!

Also, for '...redis.sock', do I need to add the port at the end, or just the path?
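Not an authoritative answer, but as I read the documentation, RedisConnectionPool accepts a server entry either as host:port or as a bare UNIX domain socket path, so no port is appended to a socket. A minimal sketch of the two forms (the socket path below is a placeholder):

        $wgObjectCaches['redis'] = [
                'class'   => 'RedisBagOStuff',
                // TCP connection: host (or IP) plus port
                // 'servers' => [ '127.0.0.1:6379' ],
                // UNIX domain socket: just the absolute path, no port
                'servers' => [ '/run/redis/redis.sock' ],
        ];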

Egel (talkcontribs)

MW 1.34 (1.36 in update below): Here's how to add support for setting the Redis database.

TazzyTazzy (talkcontribs)

I've made a gist to replace the getConnection function for MediaWiki 1.34.

1) Edit the file mediawiki/includes/libs/redis/RedisConnectionPool.php.

2) Replace the getConnection function with the one from the gist.

3) This adds four lines of code at the top:

        global $wgRedisDatabase;
        if ( is_null( $wgRedisDatabase ) ) {
                $wgRedisDatabase = 0;
        }

4) Near the bottom, add "$conn->select( $wgRedisDatabase );".

See this gist: https://gist.github.com/yombo/81ac7c5be47ccc28c3b1b5c43d90dcc2
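For anyone applying the gist: $wgRedisDatabase is not a core MediaWiki setting, it only exists once the patch above is in place, so it would then be set in LocalSettings.php along these lines:

        // Only meaningful with the patched RedisConnectionPool (hypothetical setting)
        $wgRedisDatabase = 12; // store MediaWiki's Redis data in database 12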

TazzyTazzy (talkcontribs)

Redis consuming too much memory - maxmemory_policy?

77.138.37.184 (talkcontribs)

I previously assumed that MediaWiki sets an expiration on its keys, but seeing the memory usage of Redis balloon, I realize I was probably wrong; my wiki just isn't that big. Now, Redis' default policy is 'noeviction', but at least mediawiki-vagrant's default is 'volatile-lru'. Which should I use? Does MediaWiki handle expiration at all? TIA.
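Not a definitive answer, but for context: much of MediaWiki's cache traffic goes through the BagOStuff interface, and many (though not all) writes pass a TTL, which Redis stores as a key expiry; that expiry is what the volatile-* policies act on. A rough illustration of such a write (the key name is made up):

        // Illustration only; this would run inside MediaWiki, e.g. in a maintenance script.
        $cache = ObjectCache::getLocalClusterInstance(); // the $wgMainCacheType backend
        $cache->set(
                $cache->makeKey( 'example', 'some-id' ), // hypothetical key
                'some value',
                86400 // TTL in seconds (one day); 0 would mean "never expire"
        );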

FreedomFighterSparrow (talkcontribs)

Should have logged in...

SterlingGraceTech (talkcontribs)

Can anyone help with this? I'm experiencing the same issue. Redis has 1 GB configured for memory and crashed a couple of weeks after being installed to handle the job queue, sessions and cache. I had to dump all keys in Redis manually using redis-cli in order to get things working again...

I configured it with volatile-lru, and 3 weeks later logins started failing again, but this time the Redis logs look normal (i.e. just saving to disk every 5 minutes).

Why doesn't MediaWiki have better management of expiring keys? This seems like it should be one of the first considerations when planning this functionality. Can the devs weigh in on whether this is on the roadmap, or whether there's some configuration we're missing?

Can someone explain what the right configuration is for maxmemory-policy?

I worry the most about losing scheduled jobs. Can someone provide a command that can be run from the redis-cli to selectively remove cache and/or session keys but leave the job queue keys alone?
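Not a supported recipe, just a sketch of the kind of thing that can work: with the phpredis extension you can SCAN all keys and delete everything that does not look like a job queue key. The ':jobqueue:' substring and the connection details below are assumptions, so inspect your own key names with SCAN before deleting anything.

        <?php
        // Hypothetical cleanup script (not part of MediaWiki); assumes the phpredis
        // extension and that JobQueueRedis keys contain ':jobqueue:'.
        $redis = new Redis();
        $redis->connect( '127.0.0.1', 6379 );
        $redis->setOption( Redis::OPT_SCAN, Redis::SCAN_RETRY );

        $it = null;
        while ( $keys = $redis->scan( $it, '*', 1000 ) ) {
                foreach ( $keys as $key ) {
                        // Keep anything that looks like a job queue key, drop the rest.
                        if ( strpos( $key, ':jobqueue:' ) === false ) {
                                $redis->del( $key );
                        }
                }
        }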

SirDarioTheFifth (talkcontribs)

It's an old issue, but it's something I ran into myself. My MediaWiki install was using 7.5 GB of RAM for redis-server alone.


Configure in redis.conf:

        maxmemory 2gb
        maxmemory-policy allkeys-lru

(allkeys-lru is just an example; pick whichever policy sounds like the best fit for you.)

Then stop redis / redis-server and start it again.


Clearing claimed, active jobs from a Redis job queue

Justin C Lloyd (talkcontribs)

I've seen this happen when my job queues were still in MySQL, so it was easy to clear them with SQL statements. However, I'm not sure how to clear jobs that show as claimed and active by showJobs.php --group but don't show at all with just showJobs.php:


refreshLinks: 0 queued; 2093 claimed (2093 active, 0 abandoned); 0 delayed
enotifNotify: 0 queued; 2 claimed (2 active, 0 abandoned); 0 delayed
refreshLinksPrioritized: 0 queued; 2 claimed (2 active, 0 abandoned); 0 delayed
refreshLinksDynamic: 0 queued; 9 claimed (9 active, 0 abandoned); 0 delayed
smw.changePropagationUpdate: 0 queued; 620 claimed (620 active, 0 abandoned); 0 delayed


The queue has looked like this for days and I have verified that other jobs are being created and then handled by my job runner service.

I'd like to actually get these jobs to be processed and not just delete them from the queue, if possible, though I'm not sure of even the right way to do that in Redis.

Any suggestions?
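Not an answer, but a way to get more visibility: JobQueueRedis keeps claimed job IDs in a sorted set whose score is, as far as I can tell, the claim timestamp. A small phpredis sketch to see how stale those entries are; the key name assumes a '<wikiId>:jobqueue:<type>:z-claimed' layout and is only a placeholder, so check your actual key names with SCAN first.

        <?php
        // Hypothetical inspection script; requires the phpredis extension.
        $redis = new Redis();
        $redis->connect( '127.0.0.1', 6379 );

        // Placeholder key: adjust the wiki ID and job type to match your instance.
        $key = 'mywiki:jobqueue:refreshLinks:z-claimed';

        // Members are job IDs; scores should be the claim timestamps.
        foreach ( $redis->zRange( $key, 0, 9, true ) as $jobId => $claimedAt ) {
                printf( "job %s claimed %d seconds ago\n", $jobId, time() - (int)$claimedAt );
        }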


Need to clear the cache after a DB restore

BertrandGorge (talkcontribs)

Hello, I've restored a MW DB, but the cache in Redis keeps showing data from before the restoration. Is there a way to invalidate or clear all of the Redis cache?

I've tried recreating the Docker image for Redis, without effect. Any ideas welcome!

KHarlan (WMF) (talkcontribs)

Are you using a volume mount with redis to persist its data? If not, restarting redis should be enough. If you are using a volume mount, try `docker volume rm` for the redis data volume.

BertrandGorge (talkcontribs)

Hello Kosta, sorry, I actually missed your answer (I should tune my notifications!). I added a script to purge all the keys for each wiki I have. You are right that I could simply recreate the container; that should do the job as well (but it might also drop the PHP sessions ongoing on the server)!


Selecting a Redis database ID to use?

Bctrainers (talkcontribs)

Hi,

I looked over the code (from what I could see for Redis) and did not come across a function or setting that could be set in the array ($wgObjectCaches / $wgJobTypeConf) to define which database to store data in. Right now, MediaWiki is storing all Redis data in database 0, which isn't quite ideal for my setup. I would like to have it stored in a database that is non-zero.


In doing so, I've attempted the traditional way of appending /12 to the server address, both in $wgObjectCaches['redis']:

        'servers' => array( '192.168.20.206:6379/12' ),

and for 'redisServer' within $wgJobTypeConf['default']:

        'redisServer' => array( '192.168.20.206:6379/12' ),

This does not work; it causes sessions to fail and ultimately a 503 on the nginx server (running PHP 7.3 FPM).


So I come here to ask this: is there any support for MediaWiki to use a set database ID on Redis?

2403:5800:9100:BE00:550:55BC:EFE5:644A (talkcontribs)

What's a good connectTimeout when 'persistent' => true?

Deletedaccount4567435 (talkcontribs)

Assume the php-fpm process lives for 5000 seconds, and I set 'persistent' => true.

Is it better to set 'connectTimeout' => 5000 instead of the default of 1 second?
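As I understand it, connectTimeout only bounds how long establishing a new connection may take; it is not the lifetime of a persistent connection, so there should be no need to tie it to the FPM worker's lifetime. A sketch of where the options sit (the values here are only illustrative):

        $wgObjectCaches['redis'] = [
                'class'          => 'RedisBagOStuff',
                'servers'        => [ '127.0.0.1:6379' ],
                'persistent'     => true,
                // Seconds allowed for establishing a connection, not how long it lives.
                'connectTimeout' => 1,
        ];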


redisJobRunnerService

PJosepherum (talkcontribs)

The instructions do not make clear that in order for Redis to handle the job queue, mediawiki-services-jobrunner is required or else a critical error will occur:

Exception from line 92 of /var/www/html/w/includes/jobqueue/JobQueueRedis.php: Non-daemonized mode is no longer supported. Please install the mediawiki/services/jobrunner service and update $wgJobTypeConf as needed.

I have found the service required, but the instructions are not clear and installation is not intuitive for end-users. Puppet does not appear simple or necessary to install for most users, and it is not immediately obvious that the service is simply a script which needs to be perpetually run.

I am thinking I may be able to develop a solution by modifying the config available at https://github.com/wikimedia/mediawiki-services-jobrunner into a systemd service or init script by some other means.
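For what it's worth, a minimal systemd unit along those lines might look like the following; the unit name, paths, user, and config location are all placeholders to adapt, and the companion redisJobChronService would want a similar unit:

        # /etc/systemd/system/mw-jobrunner.service (hypothetical name and paths)
        [Unit]
        Description=MediaWiki Redis job runner
        After=network.target redis-server.service

        [Service]
        ExecStart=/usr/bin/php /srv/jobrunner/redisJobRunnerService --config-file=/srv/jobrunner/config.json
        Restart=always
        User=www-data

        [Install]
        WantedBy=multi-user.target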

Any ideas or support would be appreciated as Redis has been suggested as a more responsive cache system for use with some of Extension:Semantic MediaWiki's new features.

Toniher (talkcontribs)

I think I managed to make it work with code like this:

$wgJobTypeConf['default'] = [
        'class'       => 'JobQueueRedis',
        'redisServer' => "127.0.0.1",
        'redisConfig' => [
                'connectTimeout' => 1
        ],
        'daemonized'  => true
];

Basically by adding daemonized => true.

Kghbln (talkcontribs)

That's great news that you got it working. I will try to set up redis for the sandbox wiki.

FreedomFighterSparrow (talkcontribs)

"mediawiki/services/jobrunner" appears to be Wikimedia's way of continuosly running the job queue; AFAIK it's still possible to run it using cron and/or $wgJobRunRate. So adding 'daemonized' => true should indeed be enough. I'm still testing this.

Automatic Failover

Oborseth~mediawikiwiki (talkcontribs)

Is there any explanation of how "automaticFailOver" works? How would one configure this to fail over automatically?

This post was posted by Oborseth~mediawikiwiki, but signed as Oborseth.

Job queue

Toniher (talkcontribs)