
PostgreSQL

Copying a PostgreSQL database from one computer to another

When moving a database from one computer to another, you've got to copy all the data.

For PostgreSQL, there is a pair of command-line tools named pg_dump and pg_restore.

In most cases that works great; however, some systems create things such as additional functions that will NOT make it across. I do not know how to get those to transfer (maybe it works in newer versions of PostgreSQL?)

In any event, in most cases the commands sketched below are enough (I used them with Drupal websites, and since they stopped using some of the definitions they have in the system.install file used on initialization, ...
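A minimal sketch of the kind of commands involved, assuming a database named drupal_db and the postgres superuser (both names are just examples; adjust them to your setup):

  # On the old server: dump the database in the custom format
  pg_dump -U postgres -Fc drupal_db > drupal_db.dump

  # Copy the dump to the new server
  scp drupal_db.dump newserver:/tmp/

  # On the new server: create an empty database and restore into it
  createdb -U postgres drupal_db
  pg_restore -U postgres -d drupal_db /tmp/drupal_db.dump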

PostgreSQL and blobs change in version 9.0 (bytea_output)

As I was moving my Drupal 6.x websites from an old server to a new one, I decided to make use of the newest PostgreSQL database system, because over time I've noticed quite a few things getting fixed, and thus it is a lot better! (It is not as exciting as moving from 7.4 to 8.0 way back then, but still!)

Doing so, my Drupal site started showing WSODs (White Screens Of Death). Looking at the errors in my apache2 logs, I noticed that the error was linked to the cache. Looking closer, I could see that the cache (one of the only tables to do so) makes use of the %b flag, which means blob. It ...
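If the culprit is indeed the new default bytea output format introduced in 9.0, one hedged workaround is to switch that database back to the old escape format; a sketch, assuming a database named drupal_db:

  # Revert to the pre-9.0 "escape" output format for this database only
  psql -U postgres -c "ALTER DATABASE drupal_db SET bytea_output = 'escape';"

  # Or set it cluster-wide in postgresql.conf and reload:
  #   bytea_output = 'escape'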

pg_upgradecluster to move clusters between versions

Until now, I only had to move my PostgreSQL databases between computers, so I would pg_dump + pg_restore the databases.

However, PostgreSQL offers a tool named pg_upgradecluster that does that work for you, plus it copies all the things that pg_dump doesn't always catch, such as functions, existing users, etc.

This sounds all good...

So I ran it against our cluster of 298 databases and got a little surprise: it takes about 10 minutes to run on one database, regardless of its size. Yes! 10 minutes. Our biggest database has over 1 million rows, the smallest have a few hundred. Impressive...
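For reference, on Debian/Ubuntu the call itself is short; a sketch, where the version numbers and cluster name are only an example:

  # Upgrade the 8.4 "main" cluster to the newest installed PostgreSQL version
  sudo pg_upgradecluster 8.4 main

  # Once everything has been verified, the old cluster can be removed
  sudo pg_dropcluster 8.4 main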

PostgreSQL and UNICODE

Trying to create a database with UNICODE does not always work. I ran into a problem with a RedHat installation...

There can be two problems:

  1. The default database encoding is ASCII (more or less considered to be ISO-8859-1)
  2. The database cluster was compiled without UNICODE support

Nowadays, the second case is quite unlikely, and it means that you won't be able to use a UNICODE encoding unless you recompile the database system. Make sure you use the ./configure --enable-multibyte option for that purpose.

In the first case, however, it could be that your cluster was initialized ...
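In that first situation, a couple of hedged ways to get a UTF-8 database anyway (mydb is a hypothetical name):

  # Create a UTF-8 database from template0 (template1 may have an incompatible encoding)
  createdb -E UTF8 -T template0 mydb

  # Or, if the whole cluster was initialized with the wrong encoding,
  # re-initialize it (warning: this wipes the data directory!):
  #   initdb -D /var/lib/postgresql/data -E UTF8 --locale=en_US.UTF-8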

PHP CLI Segmentation Fault with pgsql

Somehow, whenever I tried PHP CLI, it would crash because of PostgreSQL. (i.e. if I use the database in PHP through Apache, no problem, but the same script run via the command line crashes on exit, no matter what I do with the resource handle.)

I was puzzled and did not really know what to do, but some people have found the solution. Now, what is really surprising is the fact that the bug has been around for ages and has still not been fixed in the system.

The fact is, it does not matter too much as long as there is a workaround, but you need to apply it yourself...

Table owner in PostgreSQL

System tables, and how to edit them, are just not defined in the SQL language, unfortunately. So different database systems have different solutions for handling their table declarations. PostgreSQL is no different.

In most cases, MySQL uses some special instruction to update system information. In PostgreSQL, you have internal tables that the administrator can access and tweak as required.
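For the common case of fixing a table's owner, you do not even need to touch the catalogs directly; a sketch, with hypothetical database, table, and role names:

  # Check current owners through the pg_tables view
  psql -U postgres -d mydb -c "SELECT tablename, tableowner FROM pg_tables WHERE schemaname = 'public';"

  # Change the owner of one table
  psql -U postgres -d mydb -c "ALTER TABLE my_table OWNER TO new_owner;"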


Auto-Vacuum not working? (and solution)

Since I manage multiple computers, each running a PostgreSQL database system, I can see discrepancies between versions. (i.e. newer versions fix problems found in older versions.)
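A quick, hedged way to check whether autovacuum is enabled and actually running on a given server (mydb is a hypothetical database name):

  # Is autovacuum turned on at all?
  psql -U postgres -d mydb -c "SHOW autovacuum;"

  # When did autovacuum last touch each table?
  psql -U postgres -d mydb -c "SELECT relname, last_autovacuum, last_analyze FROM pg_stat_user_tables;"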


OpenSSL and PostgreSQL conflict in PHP

All of last week I've been looking for the solution to a bug in one of my numerous websites.

The problem was that after registering, a user would get an error saying that the PostgreSQL connection was lost. I had never seen that with Drupal, and yet it was 100% consistent.

I tested the new code several times and had no problems with it, so I was really dismayed that all of a sudden it stopped working.

After quite some time, I finally decided that my new code was the culprit. Weird. It was. It does the same thing as one of the Core modules and yet it breaks the PostgreSQL ...

Extremely slow pg_connect() call

Got to set up a new server and first got the firewall to where I wanted it to be:

  • Block everything except ssh, Apache, SMTP, a few other things, but really not much more than that.
  • Block everything with IPv6 since we don't use it.

Then I installed Apache and a couple of websites.

The first one finally started to work, but it was very slow to show up. I checked the code, the database, nothing wrong... And the database is lightning fast! (In comparison to our previous server, that is.)

So I wondered and thought that maybe my firewall was in the way. I turned it ...
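One hedged way to confirm that the firewall, rather than PostgreSQL itself, is adding the delay is to time the same connection by name and by explicit IPv4 address (webuser and mydb are hypothetical names):

  # If "localhost" is slow but 127.0.0.1 is fast, the IPv6 path (::1) is
  # probably being silently dropped and the client waits for a timeout.
  time psql -h localhost -U webuser -d mydb -c "SELECT 1;"
  time psql -h 127.0.0.1 -U webuser -d mydb -c "SELECT 1;"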

PostgreSQL VACUUM not working...

When you use a PostgreSQL database, it gets vacuumed every night (by default on most systems; verify on yours, it may not be the case...). This is a quite simple process that makes sure the database remains fast. And it works great, too. You can have millions of entries and it remains very fast (the planner and its statistics are really good at keeping your repetitive queries fast.)

The problem is that tables grow for 24 hours and then get cleaned up. They can then grow again for the next 24 hours. If you don't clean up, the tables continue to grow, fast. Up to the point where ...
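If the nightly job is not running, the cleanup can always be triggered by hand while you investigate; a sketch (mydb is a hypothetical database name):

  # Vacuum and re-analyze every database on the server
  vacuumdb --all --analyze

  # Or just one database
  psql -d mydb -c "VACUUM ANALYZE;"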
