Michael Pyne (mpyne): Implementing a shared cache: Part 2


In my last post, I gave some background on what a shared-memory cache is, and how KDE already uses one (KPixmapCache) to save memory and make the desktop more efficient. I also noted how the current implementation leaves some things to be desired, and hinted at a new implementation I was working on.

In this second part, I’ll discuss some of the basic design principles of the new class, which I called KSharedDataCache.

Why a new class?

If you didn’t read Part 1, you may be wondering why I don’t just fix the current implementation, KPixmapCache, instead of writing new code. It’s a good question, but the short story is that KPixmapCache’s public API makes it non-trivial (to say the least :) to take the steps necessary to improve its performance. The penalty for getting it wrong is pretty severe as well: there have probably been hundreds of crash bugs reported already due to KPixmapCache.

So someone on IRC gave me the idea: why don’t I just make my improvements in a different class, say a KPixmapCache2, and move the majority of the current KPixmapCache users over to that instead? It sounded like a good option to me, so that’s what I started on, eventually settling on a generic cache layer underneath a slightly more specialized image-handling cache.

KSharedDataCache

KSharedDataCache is a class that manages a cache keyed by QString values and holding QByteArrays, for generality. The cache itself lives in shared memory, which multiple processes access based on the cache name (converted internally to a file name).
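
To make the interface concrete, here is a minimal usage sketch. The header name, constructor arguments and the find()/insert() signatures are assumptions for illustration, not necessarily the final API; renderIconSomehow() is a hypothetical expensive producer.

    // Hypothetical usage sketch -- header name, constructor arguments and
    // method signatures are assumptions, not necessarily the final API.
    #include <kshareddatacache.h>   // assumed header name
    #include <QByteArray>
    #include <QString>

    QByteArray renderIconSomehow();  // hypothetical expensive producer

    QByteArray cachedIcon(const QString &key)
    {
        // One named cache, shared by every process that opens the same name.
        static KSharedDataCache cache(QStringLiteral("iconcache-demo"),
                                      10 * 1024 * 1024 /* ~10 MiB */);

        QByteArray data;
        if (cache.find(key, &data))      // hit: copied straight out of shared memory
            return data;

        data = renderIconSomehow();      // miss: produce the data the slow way
        cache.insert(key, data);         // and publish it for everyone else
        return data;
    }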

The central data structure is the cache itself. Everything needed to insert items, find items, and otherwise manage the cache is kept in a single memory segment, instead of being split across two different files as in KPixmapCache. A very ugly drawing of the layout would look like this:

[Figure: block diagram of the KSharedDataCache memory layout]

Starting from the left, we have the header for the shared cache itself. This contains several important pieces of data, including the cache size, the page size (which is adjustable), the number of free pages, and the mutex which protects against concurrent access to the shared data.
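
In rough code terms, the header might look something like the sketch below; the field names and types are illustrative guesses, not copied from the real implementation.

    // Rough sketch of the shared cache header. Field names and types are
    // illustrative guesses, not taken from the actual source.
    #include <pthread.h>
    #include <stdint.h>

    struct SharedMemoryHeader {
        pthread_mutex_t lock;       // process-shared mutex guarding the cache
        uint32_t cacheSize;         // total size of the shared segment
        uint32_t pageSize;          // size of one allocation page (adjustable)
        uint32_t pageCount;         // number of pages in the data area
        uint32_t freePages;         // pages currently unused
        // the entry index table, page table and data pages follow in the segment
    };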

One note about the mutex: it is used instead of KLockFile. It requires support for process-shared POSIX thread primitives (which XSI-conformant systems are required to provide, but which did not appear in Linux/glibc until NPTL, IIRC). As long as your system tells the truth about whether it supports process-shared primitives, KSharedDataCache will still work (even if it can’t use shared memory).
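
For reference, a process-shared POSIX mutex is set up with the standard attribute calls; a sketch, reusing the header struct from above, with error handling trimmed:

    // Sketch: initializing the process-shared mutex inside the mapped header.
    #include <pthread.h>

    bool initSharedLock(SharedMemoryHeader *header)
    {
        pthread_mutexattr_t attr;
        if (pthread_mutexattr_init(&attr) != 0)
            return false;

        // This is the step that needs process-shared primitive support
        // from libc and the kernel.
        if (pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED) != 0) {
            pthread_mutexattr_destroy(&attr);
            return false;
        }

        const bool ok = (pthread_mutex_init(&header->lock, &attr) == 0);
        pthread_mutexattr_destroy(&attr);
        return ok;
    }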

After the cache header comes the entry index table (starting at the first byte that meets the alignment criteria, to avoid crashing on non-x86 systems… although I have none to test!). It is a fixed-size table, sized from the total cache size and page size. Entries are placed into the table based on the hash of the entry key, and each entry records information such as the item size, hash code, use count, time of last access, and the location of its data.
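
A sketch of one slot in that table (again, the names are mine, not the real ones):

    // Sketch of one slot in the entry index table; names are illustrative.
    #include <stdint.h>
    #include <time.h>

    struct IndexTableEntry {
        uint32_t fileNameHash;   // hash of the UTF-8 encoded key
        uint32_t totalItemSize;  // bytes used by key + data in the data area
        uint32_t useCount;       // bumped on every successful lookup
        time_t   lastUsedTime;   // candidate input for eviction decisions
        int32_t  firstPage;      // index of the first data page, -1 if the slot is free
    };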

Collisions are possible with any hash table. The standard answer is a method called “chaining”: keep a list of the entries that share the same hash code. Unfortunately, dynamic memory allocation is much more involved when you’re dealing with a fixed-size block of shared memory, so currently quadratic probing is used to seek out other, hopefully empty, candidate slots. Since this is just a cache, probing continues only for a small number of attempts.
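
The probe sequence might look roughly like the sketch below, using the IndexTableEntry slot from above; the probe step and the attempt limit here are arbitrary choices for illustration, not the real values.

    // Sketch: quadratic probing over the fixed-size entry index.
    #include <stdint.h>

    int findSlot(const IndexTableEntry *indexTable, uint32_t indexTableSize,
                 uint32_t keyHash, unsigned maxProbes = 8)
    {
        for (unsigned i = 0; i < maxProbes; ++i) {
            // Quadratic step: the offset grows as i^2 instead of linearly.
            const uint32_t slot = (keyHash + i * i) % indexTableSize;

            if (indexTable[slot].fileNameHash == keyHash)
                return static_cast<int>(slot);  // candidate; caller still verifies the key
            if (indexTable[slot].firstPage < 0)
                return -1;                      // empty slot: the key is not cached
        }
        return -1;   // give up after a few attempts; it is only a cache
    }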

Following the entry table is the page table, which simply records which entry currently owns each page in memory. The page table could be compressed into a bit vector, with a full page table built only when needed (currently only during defragmentation), but I didn’t have time to implement that.

Finally, the rest of the shared memory is devoted to a paged memory allocation system (this is probably the most suboptimal part of my current implementation, but at least it can be fixed later this time ;). Every entry is stored in this data area, with the key it was stored under followed by the actual QByteArray data.
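
So an item’s pages hold the key bytes immediately followed by the value bytes. A sketch of writing one item into its pages, with page allocation and items that span multiple pages glossed over:

    // Sketch: laying an item out in the paged data area as key bytes
    // followed by value bytes. Allocation and page spanning are omitted.
    #include <QByteArray>
    #include <cstddef>
    #include <cstring>

    void writeItem(char *dataPages, size_t pageOffset,
                   const QByteArray &utf8Key, const QByteArray &value)
    {
        char *dest = dataPages + pageOffset;
        std::memcpy(dest, utf8Key.constData(), utf8Key.size() + 1);  // key + '\0'
        std::memcpy(dest + utf8Key.size() + 1, value.constData(), value.size());
    }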

Resolving an access for an item with a key of “juk_128×128.png” would work something like this (a rough code sketch follows the list):

  1. Lock the cache. If the lock cannot be acquired, assume the cache is corrupt, unlink it on disk, and create it all over again.
  2. Convert the key to a byte array using the UTF-8 encoding, then compute its hash code.
  3. Use the hash code to find the appropriate entry index. Compare the hash code to the candidate entry’s hash code; if they don’t match, use quadratic probing to find another candidate. Give up if no entry is found within several attempts.
  4. If the hash codes match, read the saved key out of the entry’s data area and make sure the keys also match. If they don’t, go back to the previous step and continue quadratic probing.
  5. If the keys match, we have found our entry. Update the entry’s use count and last access time, then copy the data out of its page or pages to return to the caller. Since this all happens in shared memory, it should be much, much faster than loading the data from disk (assuming, of course, that the operating system hasn’t paged the shared memory out to disk in the meantime).
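
Put together in heavily simplified code, with the locking of step 1, page spanning and corruption recovery glossed over; hashOfKey(), storedKeyOf() and entryData() are hypothetical helpers, and findSlot() is the probing sketch from earlier:

    // Sketch of the lookup path (steps 2-5 above).
    #include <QByteArray>
    #include <QString>
    #include <cstdint>
    #include <ctime>

    // Hypothetical helpers, not part of the real code:
    uint32_t hashOfKey(const QByteArray &utf8Key);
    const char *storedKeyOf(const IndexTableEntry &entry);
    QByteArray entryData(const IndexTableEntry &entry);

    bool lookup(IndexTableEntry *indexTable, uint32_t indexTableSize,
                const QString &key, QByteArray *destination)
    {
        const QByteArray utf8Key = key.toUtf8();                      // step 2
        const uint32_t hash = hashOfKey(utf8Key);                     // step 2

        const int slot = findSlot(indexTable, indexTableSize, hash);  // step 3
        if (slot < 0)
            return false;

        // Step 4: the hash matched, so compare the stored key itself.
        if (qstrcmp(storedKeyOf(indexTable[slot]), utf8Key.constData()) != 0)
            return false;   // colliding key; the real code would keep probing

        // Step 5: record the access and copy the data out of shared memory.
        indexTable[slot].useCount++;
        indexTable[slot].lastUsedTime = std::time(nullptr);
        *destination = entryData(indexTable[slot]);
        return true;
    }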

You might have noticed that I possibly unlink the cache with reckless abandon in step 1. I actually do this in many more places, the idea being that a corrupt cache can lead to bugs that are very hard for the end user to diagnose and correct, and by definition a cache can be expected to drop entries at any time. The only danger would be tampering with a cache that other processes are currently using in shared memory. By unlinking (and only unlinking) the cache, the other processes can continue to use the inode that used to be associated with the file, and the kernel will finish the cleanup when the other processes exit.
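
That relies on ordinary POSIX unlink semantics: removing the file’s name does not invalidate mappings that other processes already hold. A tiny illustration, with error handling omitted:

    // Sketch: why unlinking is safe. Once a file is mmap()ed, removing its
    // name only drops the directory entry; the mapping (and the inode) stay
    // valid until every process has unmapped it.
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    void unlinkDemo(const char *cachePath, size_t cacheSize)
    {
        int fd = open(cachePath, O_RDWR);
        void *cache = mmap(nullptr, cacheSize, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
        close(fd);            // the mapping keeps the inode alive on its own

        unlink(cachePath);    // existing users keep using 'cache' unharmed
        // ... new processes simply won't find the file and create a fresh cache

        munmap(cache, cacheSize);   // the last unmap lets the kernel free the storage
    }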

Of course I’m up past a thousand words now, so I’ll continue in Part 3, where I’ll discuss how pointers work in shared memory, how initial cache setup is performed, and my attempts at handling cache pressure, defragmentation, etc.

Posted by: Planet KDE | Date: 01.05.2010 | Section: System & Utilities
