There is an issue with the meta_key field in the meta tables: in all of them (_postmeta, _usermeta, _commentmeta, _termmeta) the field is defined as VARCHAR(255). It should be VARCHAR(191).
The meta_key field in each of these tables is indexed, but the index only covers a 191-character prefix. The underlying limit is 767 bytes per index key, and in utf8mb4 (up to 4 bytes per character) that works out to the 191-character limit.
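For reference, WordPress already works around this limit by defining the key as a 191-character prefix index. A simplified sketch of the stock postmeta table (assuming the default `wp_` table prefix):

```sql
-- Simplified sketch of the stock wp_postmeta definition.
-- The column allows 255 characters, but the index only covers the
-- first 191: 767 bytes max key / 4 bytes per utf8mb4 char = 191.
CREATE TABLE wp_postmeta (
    meta_id    BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    post_id    BIGINT UNSIGNED NOT NULL DEFAULT 0,
    meta_key   VARCHAR(255) DEFAULT NULL,
    meta_value LONGTEXT,
    PRIMARY KEY (meta_id),
    KEY post_id (post_id),
    KEY meta_key (meta_key(191))
) DEFAULT CHARSET=utf8mb4;
```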
What this means is that when searching by meta_key the index may not be used, because the column can be longer than the index prefix. I'm going to be honest, I don't know all of the technical details here. The only thing that I can say is that I have tested installations where I altered these DB tables to set the meta_key fields to VARCHAR(191), and this resulted in slightly faster queries. Considering the number of queries made to these meta tables, especially when doing a meta_query on posts, this increase in speed can be significant.
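The change itself is a one-line ALTER per table. A sketch, assuming the default `wp_` table prefix and that no existing key exceeds 191 characters:

```sql
-- Shrink the column to match the 191-character index prefix.
-- WARNING: run only after verifying that no existing meta_key exceeds
-- 191 characters; otherwise MySQL will truncate or reject rows,
-- depending on sql_mode.
ALTER TABLE wp_postmeta    MODIFY meta_key VARCHAR(191) DEFAULT NULL;
ALTER TABLE wp_usermeta    MODIFY meta_key VARCHAR(191) DEFAULT NULL;
ALTER TABLE wp_commentmeta MODIFY meta_key VARCHAR(191) DEFAULT NULL;
ALTER TABLE wp_termmeta    MODIFY meta_key VARCHAR(191) DEFAULT NULL;
```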
This issue was addressed for the options table when the option_name field was increased from 64 characters, and that discussion covers why 191 was a better choice than 255. You can read the entire discussion here if you'd like: #13310 (Extend option_name to varchar(255)) – WordPress Trac.
Read-only archive : Issues · ClassicPress/ClassicPress · GitHub
Author : John A. Huebner II
Vote count : 20
Status : open
Tags :
difficulty-moderate
request-modify-feature
Comments
I should mention that the difficulty in doing this is tracking down all the places that check the length of the meta_key and altering these checks. The DB change is the easy part.
~ posted by John A. Huebner II
I don’t see why this isn’t implemented, can anyone raise any counter points?
~ posted by Dustin Snider
Any sites that have meta keys longer than 191 characters will probably experience some breakage due to this change.
I guess one way to handle this would be to scan the meta tables (in batches) before doing an upgrade. If there are no super-long meta keys (this should be the case for the vast majority of sites) then it should be safe to make this change.
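A batched pre-upgrade scan along those lines could look like this (a sketch assuming the default `wp_` prefix; repeat per meta table, advancing the meta_id cursor each pass):

```sql
-- Find over-long keys in batches, keyed on meta_id so each pass can
-- resume where the previous one left off. :last_seen_meta_id is a
-- placeholder for the cursor value carried over from the prior batch.
SELECT meta_id, post_id, meta_key
FROM wp_postmeta
WHERE meta_id > :last_seen_meta_id
  AND CHAR_LENGTH(meta_key) > 191
ORDER BY meta_id
LIMIT 10000;
```

If every batch across every meta table comes back empty, the ALTER should be safe for that site.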
~ posted by James Nylen
That’s one of the reasons I was suggesting it right at the beginning. There could be some breakage when transitioning, yes, and I don’t know how that could be handled. But like James says, it should be rare. This is one of the arguments for not having done it already. In my opinion the site is already broken with the field length larger than the index can handle: there’s an index that can’t always be used, wasting space in the DB and slowing down queries.
~ posted by John A. Huebner II
It sounds like, outside of a few rare cases, this is a good idea. The next question then is priority: should this be added to our V1 release?
~ posted by Dustin Snider
I’m not really opposed, and I see the value of doing this now, but I definitely don’t think we should allow the possibility of a broken site due to this upgrade. We should also bring it before the committee.
From a technical standpoint, that means we need to scan the meta tables and report on any issues, including long keys and whether someone has already modified the indices.
That’s a lot of work, and I don’t see realistically how we can get it developed and tested in time, unless someone new can take it on (as a simple plugin or mu-plugin at first).
If we do get it ready for v1, this check can go in the migration plugin (there will be other, simpler checks too).
If we don’t get it ready in time, we can follow a similar process for the v2 upgrade.
Also, this change generally makes sense to me, but I’d like to see some performance numbers with large amounts of data.
~ posted by James Nylen
I have a large complicated site built in multisite and I also happen to have 3 dev/staging/testing copies of this site. This DB change is actually something that I’ve been looking at doing on this site. If I can scrape the time together I will make the change on one of the dev sites and see if I can get you some numbers on query performance. I don’t know when, really swamped at the moment, but I’ll do my best.
~ posted by John A. Huebner II
I’m not the sharpest tool in the shed when it comes to DB optimization and testing. And I may have some of my facts wrong.
After doing some more research it seems that the major issue with the column length versus the index size for the options table has to do with the fact that in the options table, option_name is a UNIQUE index. In this case, when querying by option_name, the index will not be used.
On the other hand, meta_key is not a unique index. I believe that in this case the index is used as long as the value you’re looking for is shorter than the index length; it is not used only when querying for a value in the column that is longer than the index prefix.
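Whether the index is actually used in a given case is easy to check with EXPLAIN; for example (again assuming the `wp_` prefix, with `_thumbnail_id` as a sample key):

```sql
-- If the optimizer uses the prefix index, the `key` column of the
-- EXPLAIN output will show `meta_key`; NULL there means a full scan.
EXPLAIN SELECT post_id, meta_value
FROM wp_postmeta
WHERE meta_key = '_thumbnail_id';
```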
At least this is my understanding now.
While optimizing the column size might have an impact on some sites, the only real impact of this change might be to force developers to be more careful about the values they use for meta_key. As one example, if you are not careful when using ACF repeaters and nested repeaters, the meta_key values can easily exceed 191 characters if you use excessively long field and sub-field names.
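One quick way to see whether a given site is affected is to report its longest keys; a sketch (`wp_` prefix assumed, postmeta shown, repeat per meta table):

```sql
-- Report the ten longest meta keys; anything over 191 characters falls
-- outside the index prefix (and would break a VARCHAR(191) column).
SELECT meta_key, CHAR_LENGTH(meta_key) AS key_length
FROM wp_postmeta
ORDER BY CHAR_LENGTH(meta_key) DESC
LIMIT 10;
```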
Given this I don’t see that this would be something that I’d be in any hurry to implement. I only suggested it based on what I now believe was a faulty understanding of indexes.
~ posted by John A. Huebner II