Tech Life of Recht » archive for 'Databases'

 Playing it safe: Choosing database engines

  • December 13th, 2007
  • 11:32 pm

The project I’m working on now is using Oracle for data storage. Not only that, it’s good old 9.2 – try installing that on your brand new Ubuntu. Oh well, it’s doable, but recently we’ve begun discussing whether we could run it on MySQL or PostgreSQL instead (I’m voting for PostgreSQL, mostly because I used MySQL back when foreign keys and other advanced features weren’t exactly implemented, and in the meantime I’ve come to like PostgreSQL. At the very least, hitting Ctrl-C in psql does not exit the prompt the way it does in mysql).
The way it works now is that if something goes wrong, we can call either Oracle or our strategic Oracle partner, and they’ll fix it. This is just about everything I’m against: paying large amounts of money just so we can blame somebody else when things screw up.
There are basically two issues with switching to a more lightweight database, and they’re probably more or less the same on many projects. First, performance: Oracle probably has a real advantage there, especially in handling large and complex queries. Second, maintenance, especially backup and failover.
The only one of these points I think is valid is backup – it doesn’t really help if it takes a week to back up or restore data. That said, we’re left with two things: performance and reliability. The Oracle way (I’m using the term as broadly as possible; replace Oracle with MSSQL or DB2 if you like) is to add more memory, disks, clustering, and high availability. Expensive, but you get to live in your little world where you can just write code against one large database.
The other way, which I prefer (and which is probably more in the Web 2.0 spirit), is to distribute: distribute both data and processing to a number of autonomous nodes which can operate independently of each other. This is nothing new and has been done many times, but it’s not something that’s normally considered when building good old business applications.
The result of distributing is essentially that you have to think about how you’re accessing your data. Instead of just delegating all of the work to the database and hoping for the best, you’re forced to analyze data relationships to discover separate components. If this succeeds, the choice of database is no longer about whether it can optimize a query over 50 tables with subselects, type conversions, views, functions, and other stuff, but whether it is efficient at looking up simple data relations. My guess is that all the popular databases can do this, so you’re free to choose the cheapest, the one which is easiest to work with, or whatever suits your environment.

Returning to the project I’m working on, we have some reservations about some of the queries we’re executing. They’re in Oracle syntax right now, but can probably be converted to standard SQL in a finite amount of time. That doesn’t make them any smaller, but we have a good amount of pretty static data (addresses, classifications, and so on) which is retrieved together with the more dynamic data. If the static data is removed from the SQL, we end up with much simpler queries, which shouldn’t be a problem for any database engine. The problem which remains is how to retrieve the static data efficiently. At the moment, I’m leaning towards a solution where we implement a service which can take a list of data keys and return the corresponding data. Depending on the amount of data, the service can then be implemented as an in-memory map, a memcached cache, or maybe even something like Hadoop. No matter what, basing the model on the basic principle of isolating static data from the dynamic, and only querying the dynamic data, seems like the way to go as a first step – and as a nice side effect, the dependency on the database’s query performance matters much less.
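To make the idea concrete, here is a minimal sketch with hypothetical table names (orders as the dynamic data, classifications as the static data – neither is from our actual schema): the dynamic query returns only foreign keys, and the static rows are fetched separately by key, which is exactly the kind of lookup a cache or key/value service could take over.
[code]
-- Today: static data is joined into every dynamic query (hypothetical schema)
select o.id, o.amount, c.name, c.description
from orders o
join classifications c on c.id = o.classification_id
where o.created > date '2007-12-01';

-- Step 1: the dynamic query only returns keys into the static data
select o.id, o.amount, o.classification_id
from orders o
where o.created > date '2007-12-01';

-- Step 2: resolve the keys against the static data - a trivial lookup
-- that an in-memory map or memcached could serve just as well
select id, name, description
from classifications
where id in (17, 42);
[/code]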

This probably just sounds like drunken ramblings to those who have actually implemented distributed business systems, but bear with me – it’s a first for me, and I need to get things out of my head before the space runs out.

 Debugging PostgreSQL locks

  • April 18th, 2007
  • 10:47 pm

Having trouble with deadlocks in PostgreSQL? Or do you just want to know what’s happening in the server? Then read on – here are some of the answers.

List active locks on a database:
[code]
spam=> select l.* from pg_locks l, pg_database d where l.database = d.oid and d.datname = 'spam';
 locktype | database | relation | page | tuple | transactionid | classid | objid | objsubid | transaction |  pid  |       mode       | granted
----------+----------+----------+------+-------+---------------+---------+-------+----------+-------------+-------+------------------+---------
 relation |  1713927 |    10342 |      |       |               |         |       |          |    84974075 | 21333 | AccessShareLock  | t
 relation |  1713927 |  1713941 |      |       |               |         |       |          |    84974074 | 21775 | AccessShareLock  | t
 relation |  1713927 |  1713941 |      |       |               |         |       |          |    84974074 | 21775 | RowShareLock     | t
 relation |  1713927 |  1713955 |      |       |               |         |       |          |    84974074 | 21775 | AccessShareLock  | t
 relation |  1713927 |  1713957 |      |       |               |         |       |          |    84974074 | 21775 | RowExclusiveLock | t
 relation |  1713927 |  1713974 |      |       |               |         |       |          |    84974074 | 21775 | AccessShareLock  | t
 relation |  1713927 |  2398435 |      |       |               |         |       |          |    84974074 | 21775 | AccessShareLock  | t
 relation |  1713927 |  2408292 |      |       |               |         |       |          |    84971073 | 19967 | AccessShareLock  | t
 relation |  1713927 |  2408292 |      |       |               |         |       |          |    84971073 | 19967 | RowExclusiveLock | t
 relation |  1713927 |  2408292 |      |       |               |         |       |          |    84971735 | 20050 | AccessShareLock  | t
 relation |  1713927 |  2408292 |      |       |               |         |       |          |    84971735 | 20050 | RowShareLock     | t
 tuple    |  1713927 |  2408292 |    0 |     4 |               |         |       |          |    84971735 | 20050 | ExclusiveLock    | t
(12 rows)
[/code]

Here we have 12 locks on the spam database. The lock modes are described in the PostgreSQL documentation, but briefly they mean this:

  • AccessShareLock: The table is locked for shared reading – any number of transactions can read it concurrently
  • RowShareLock: Taken on a table by queries that intend to lock rows (e.g. SELECT ... FOR UPDATE)
  • RowExclusiveLock: Taken on a table by statements that write rows (INSERT, UPDATE, DELETE)
  • ExclusiveLock: Exclusive access – for the tuple entry in the listing above, this is the actual row lock

Exclusive locks prevent others from accessing the data until the lock has been released, and shared locks prevent exclusive locking until they are released.
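To see how the shared and exclusive modes interact, here is a minimal two-session sketch (assuming the same testing table, with an id column, that is used below):
[code]
-- Session 1: lock a row and keep the transaction open
begin;
select * from testing where id = 2 for update;
-- takes RowShareLock on the table and a row lock on the matching tuple

-- Session 2: tries to write the same row and blocks
begin;
update testing set id = 3 where id = 2;
-- the table-level RowExclusiveLock is granted (it does not conflict
-- with RowShareLock), but the row itself is taken, so this statement
-- waits until session 1 commits or rolls back
[/code]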

The relation column from pg_locks points to the table being locked. First, the table name can be found:
[code]
spam=> select relname from pg_class where oid = 2408292;
 relname
---------
 testing
(1 row)
[/code]

So, the last lock in the list is on “testing”. Further, the lock is a tuple lock, meaning it applies to a specific row – in this case, the row at page 0, tuple 4. These numbers are internal; to get the actual data, do this:
[code]
spam=> select * from testing where ctid = '(0,4)';
 id
----
  2
(1 row)
[/code]

The ctid field is an internal field which identifies the row by its physical location. So, we now know exactly which row is locked. We can also see that there are two transactions locking the “testing” relation. Who is waiting for whom is not clear from pg_locks alone, but if we look at the executed commands, it’s possible to see who’s waiting (this requires "stats_command_string = on" in postgresql.conf):

Use the value from the pid column, which is the process id of the backend process handling the connection:
[code]
spam=> select current_query, query_start from pg_stat_activity where procpid = 20050;
                 current_query                  |          query_start
------------------------------------------------+-------------------------------
 select * from testing where id = 2 for update; | 2007-04-18 22:35:43.738204+02
(1 row)
[/code]

So, a select query which locks a row (id=2, as expected).
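
If a backend looks stuck rather than just slow, the waiting lock requests can be listed directly: they show up in pg_locks with granted set to false. A variant of the queries above (using the same column names) lists the blocked processes together with what they are trying to run:
[code]
spam=> select l.pid, l.locktype, l.mode, a.current_query from pg_locks l, pg_stat_activity a where l.pid = a.procpid and not l.granted;
[/code]
Any rows returned here are lock requests that have not been granted yet – in other words, the waiters.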

Generally, the pg_* tables contain a lot of useful information, so it’s worth taking a look at them. They are all documented on the PostgreSQL site.
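
Finally, the oid and pid lookups above can be folded into a single query – just a convenience sketch combining the system tables used in this post – so that table names, lock modes, and the running statements come out in one listing:
[code]
spam=> select c.relname, l.mode, l.granted, a.current_query from pg_locks l join pg_class c on c.oid = l.relation join pg_stat_activity a on a.procpid = l.pid;
[/code]
Note that locks which are not tied to a relation (transaction id locks, for example) are dropped by the join on pg_class.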