The first thing to remember about PAGELATCH_UP is that while it’s not a lock, it can block.
The screenshot above is from an SQL observability tool: Idera SQL Diagnostics Manager for SQL Server. It’s well worth the price.
An observability tool that takes the time to highlight particular activity with bright red backgrounds is definitely giving you a hint that an improvement can be made. In the screenshot above, the boxes with the red background are queries that are blocked by another query. The reason for the blocking is outlined in the red blocks on the right side: PAGELATCH_UP on wait resources 2:1:1, 2:5:32352, and 2:3:48528.
So, what does that mean?
Well … I first needed to know, what is PAGELATCH_UP? Here are some resources to get started (at the time of this writing these links work; I sure hope they continue to work in the future):
- Diagnosing and Resolving Latch Contention on SQL Server (PDF)
A 2011 whitepaper from Microsoft’s SQL Experts on how to work through latch problems.
It starts out with a description of all the different latches, and describes PAGELATCH_UP as a type of hold on a page in memory taken to read the page with the potential of updating it.
A similar description can be found on the MSDN Doc, Q&A on Latches in SQL Server Engine.
Near the beginning of the Diagnosing PDF, there is a section on likely scenarios that will produce PAGELATCH_UP contention. One of those scenarios is “High degree of concurrency at the application level”, which is exactly the situation that produced the screenshot above. That screenshot was produced by having around 1000 simultaneous users trying to read from the same table at the same time, repeatedly.
For a “High degree of concurrency at the application level” problem, the whitepaper suggested reading Table-Valued Function and tempdb Contention (broken link: http://go.microsoft.com/fwlink/p/?LinkID=214993).
- Table-Valued Function and tempdb Contention / SQLCAT’s Guide To: Relational Engine (PDF)
So, “Table-Valued Function and tempdb Contention” was a blog post on one of Microsoft’s websites at some point in time, and it has been referenced in a number of other blog posts. However, the original post is no longer available.
But in 2013 the Microsoft SQLCAT team took the blog post in its entirety and copied it into a new whitepaper they were writing, SQLCAT’s Guide To: Relational Engine. That is a fantastic blessing, because working only from Google searches it really felt like that knowledge had been lost.
The whitepaper describes some key pieces of the puzzle: PAGELATCH_UP waits, Page Free Space (PFS) pages, and the Shared Global Allocation Map (SGAM).
Less detailed information about these components can be found in TempDB Monitoring and Troubleshooting: Allocation Bottleneck.
The important information that the Table-Valued Function and tempdb Contention whitepaper describes is that a PAGELATCH_UP block on Wait Resource 2:1:1 is a query waiting to read the very first PFS page of tempdb’s first data file. In fact, all of those Wait Resource references are PFS pages. You can determine whether a Wait Resource is a PFS page by dividing the third number by 8088: if it is page 1 or is evenly divisible by 8088, it is a PFS page (32352 / 8088 = 4, 48528 / 8088 = 6).
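If you don’t want to do that math by hand, a throwaway T-SQL snippet like this sketch works (the @page_id value is just the third number from one of the wait resources above):

```sql
-- A minimal sketch: test whether a wait-resource page number is a PFS page.
-- PFS pages sit at page 1 of each data file and then every 8,088 pages after it.
DECLARE @page_id int = 32352;  -- e.g. the third number from 2:5:32352

SELECT
    @page_id AS page_id,
    CASE WHEN @page_id = 1 OR @page_id % 8088 = 0
         THEN 'PFS page'
         ELSE 'not a PFS page'
    END AS page_kind;
```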
The guide then goes on to describe what is happening in the background: the system is looking to allocate some tempdb space in order to hold temporary results as it processes a query. For tempdb to reserve free space, it requests the space it needs through SGAM, which in turn consults the PFS pages to find where free space is available. At that point the PFS page is read and, if free space is found, that space is marked as allocated (which requires taking a PAGELATCH_UP on the PFS page to read and update its information).
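If you want to catch this contention as it happens, a query along these lines against the sys.dm_os_waiting_tasks DMV (a sketch, not a tuned monitoring script) shows which sessions are waiting on page latches, which page they are waiting on, and who is holding them up:

```sql
-- A sketch: list sessions currently waiting on page latches, along with
-- the database:file:page wait resource and the blocking session, if any.
SELECT
    wt.session_id,
    wt.wait_type,
    wt.wait_duration_ms,
    wt.resource_description,   -- e.g. 2:1:1 or 2:5:32352
    wt.blocking_session_id
FROM sys.dm_os_waiting_tasks AS wt
WHERE wt.wait_type LIKE 'PAGELATCH%'
ORDER BY wt.wait_duration_ms DESC;
```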
The way the original PFS search algorithm was written, it first looks for free space at the very first PFS page (2:1:1) and then sequentially works through PFS pages until it finds free space.
- Recommendations to reduce allocation contention in SQL Server tempdb database
SQLCAT’s Guide To: Relational Engine goes on to give a few suggestions on how to reduce tempdb PFS contention. This contention has been around for a while, and one of the ways performance can be improved is to split tempdb into multiple files. The 2:5:32352 wait resource above can be read as follows (a short query after the list shows how to turn these IDs back into names):
2 = Database Id, 2 is always tempdb
5 = File 5, databases can be split into multiple physical files on disk
32352 = Page 32352, pages are 8 KB in size and are the smallest unit of storage the system allocates. Page 1 and every page number evenly divisible by 8088 are PFS pages.
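To turn the first two numbers back into names, a quick lookup like this sketch works (DB_NAME and sys.master_files are standard system objects; the 2 and 5 come straight from the wait resource above):

```sql
-- A sketch: resolve the database id and file id from a wait resource (2:5:32352)
-- into a database name and a physical file name.
SELECT
    DB_NAME(2)       AS database_name,   -- database id 2 is always tempdb
    mf.name          AS logical_file_name,
    mf.physical_name
FROM sys.master_files AS mf
WHERE mf.database_id = 2
  AND mf.file_id = 5;
```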
The most commonly suggested approach to splitting tempdb into multiple files is to make one tempdb data file for each logical core on the server. If you have 8 cores, then split tempdb into 8 evenly sized files. The reason for this is that it allows each core to search a different physical file when looking for free space, which reduces contention on the files and their associated latches.
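As a rough illustration (the file name, path, and sizes below are made-up placeholders, not recommendations), you can compare the number of visible schedulers to the number of tempdb data files, and add files if the counts are far apart:

```sql
-- A sketch: how many visible schedulers (cores) does the instance have,
-- and how many tempdb data files exist right now?
SELECT COUNT(*) AS visible_schedulers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';

SELECT COUNT(*) AS tempdb_data_files
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';

-- Add one more evenly sized data file (name, path, and sizes are placeholders):
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'D:\tempdb\tempdev2.ndf',
          SIZE = 1024MB,
          FILEGROWTH = 256MB);
```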
- PFS page round robin algorithm improvement in SQL Server 2014, 2016 and 2017
Around May of 2018 the SQL Server team also came up with an improved way to search PFS pages in order to reduce contention. Instead of always starting with 2:X:1 as the first page read in each file, it now remembers the last page where it found space and starts the next search at the first page following that previous stopping point. This has the overall effect of evenly distributing tempdb allocations over the whole of the allocated space.
Many years ago, spreading data out across the disk would have been a performance loss, especially on spindle hard drives. This is the reason that defrags and DBCC shrink operations push the data next to each other on disk.
However, I don’t know if using plain spindle drives on a SQL Server is a very common practice anymore. If you’re running in the cloud or in an on-premises data center, it’s likely that the backing storage is a SAN array of some kind. It might be a SAN array of spindle disks, but it’s still probably a group of disks working together to make a high-performance array. At that point, does it really matter if the data is grouped together? (That would be pretty interesting to look up.)
If it doesn’t need to be grouped together, then leaving the tempdb allocations evenly spread out seems like a performance increase without any downsides.
Hopefully, that long wall of text helps lower the amount of research needed to find out what PAGELATCH_UP is and why it might be blocking threads in a SQL Server database.