A site for solving at least some of your technical problems...
Working on a website, I have to convert invoices to PDF so end users can print and share them from the website.
To do that, I generate an HTML page with the invoice, which also gets displayed on the website, and then convert that HTML to PDF with xhtml2pdf. I use that tool instead of wkhtmltopdf because it does not require X11 to work. The other tool is said to require Qt and X11, and we do not want those on our backend servers.
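The whole conversion can be sketched in a couple of commands (the invoice content and file names here are stand-ins; the guard keeps the sketch harmless on machines where xhtml2pdf is not installed):

```shell
# Generate the invoice HTML, then convert it with the xhtml2pdf
# command-line tool; no Qt or X11 is required.
cat > invoice.html <<'EOF'
<html><body><h1>Invoice #12345</h1><p>Total: $100.00</p></body></html>
EOF
if command -v xhtml2pdf >/dev/null 2>&1; then
    xhtml2pdf invoice.html invoice.pdf    # writes invoice.pdf next to the HTML
fi
```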
So... I upgraded to Ubuntu 14.04 and the tool stopped working with an ugly error:
:process.cpp:396:halk: info: Running process ...
I run a few Drupal websites and once per hour I run the cron.php script. I do it only once per hour because nothing changes often enough on my websites to require faster refreshes.
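The hourly run is just a crontab entry; a sketch of it could look like this (the minute and the URL are stand-ins for the real values):

```
# Hypothetical crontab entry (edit with crontab -e): fetch cron.php once
# per hour, at minute 15, discarding the output.
15 * * * * wget -q -O /dev/null http://www.example.com/cron.php
```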
Once in a while (relatively rarely now) I get a list of errors from cron saying that the checks failed. The errors look something like this:
HTTP/1.0 302 Found Location: /cgi-bin/ipdiags.ha Pragma: no-cache Content-Type: text/html <html><meta http-equiv=Refresh content=0;url=/cgi-bin/ipdiags.ha> <body></body></html>
As you can see, this is a 302, which is a temporary redirect. ...
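The captured response can be inspected offline; a small sketch of the check, using the headers quoted above:

```shell
# The status line alone tells us the request never reached cron.php and
# was instead redirected to the router's own diagnostic page.
response='HTTP/1.0 302 Found
Location: /cgi-bin/ipdiags.ha'
status=$(printf '%s\n' "$response" | head -n 1)
case "$status" in
  *" 302 "*) echo "temporary redirect";;
esac
```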
I use the Cassandra database cluster system to manage a new set of websites and once in a while I start getting many errors and the website stops working altogether.
When that happens, it is likely that Cassandra broke something in the temporary tables it holds. The only way to get past that problem is to clear those tables. Until then, it will fail over and over again (they really should have some heuristic to auto-clean up, even if it means losing some data.)
The command to repair the database, which is really quick, is as follows:
nodetool scrub snap_websites files
Note that ...
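If the scrub alone does not clear the errors, a repair pass over the same keyspace is a plausible next step (the keyspace and table names below are the ones from the example above; whether repair is needed depends on your cluster):

```
nodetool scrub snap_websites files
nodetool repair snap_websites
```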
As we were working on a new website, we had a problem where a redirect would not work. I tried both a simple Redirect and a RedirectMatch, as follows:
Redirect / http://finball.m2osw.com/
RedirectMatch permanent ^(.*)$ http://finball.m2osw.com$1
Neither of these entries would work at all.
I verified, to make sure, that the alias module was turned on. It was.
ls -l /etc/apache2/mods-enabled
This did list the alias.conf and alias.load entries as expected.
So? What else?
Well... This was installed on a new server and we left the default entry in there:
ls -l ...
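Since the excerpt stops short, here is only a sketch of the usual shape of the fix: the Redirect has to live in the VirtualHost that actually answers the request, instead of being shadowed by the default site (the old host name below is hypothetical):

```
<VirtualHost *:80>
    # Hypothetical old host name; requests here are forwarded permanently.
    ServerName old.example.com
    Redirect permanent / http://finball.m2osw.com/
</VirtualHost>
```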
I'm starting this page and hope to come back to it later as I find additional tools... but since I have network problems, I often need these types of services to make sure I can get the information I need.
For more Network stuff, click on the Network tag!
Check your current IP address from your browser:
http://alexis.m2osw.com/nvg510/my-ip.php [Super clean version!]
http://www.whatismyip.com/ [More advanced and with ads...]
This one is for people who set up a DNS, to make sure that it can be accessed from all over the world. It ...
As I created a new site to list all of my accounts on the Internet, I thought the folder where those accounts appear should be called profiles. But somehow pathauto did not generate the URL alias as expected.
I tried several times and each time it returned an empty alias. Then I tried adding the alias by hand, and that was accepted by Drupal, but when I then tried to go to that page it failed with an Apache error, which at first I found odd. Then I recalled that there is a folder named profiles in the top directory of Drupal. The Drupal code (from the
I got a new WordPress website a couple of days ago and got it installed in the last few days. There were 3 images missing, so I started working on getting them in. When I added the first image in WordPress, I got an error... with no detailed explanation (maybe there is a log, but I don't know WordPress well enough to tell.)
The error message was just: IO Error
I was pretty sure that the problem was simply that the folder where WordPress tries to upload new content was write-protected against the Apache user. Under Ubuntu and Debian, the default name for that user is www-data and
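The effect can be demonstrated on a scratch directory, standing in for wp-content/uploads (the real fix on a live install would be a chown/chmod on the actual uploads folder, run as root):

```shell
# WordPress can only save media when the web-server user (www-data on
# Ubuntu/Debian) can write to the uploads folder.
uploads=$(mktemp -d)
chmod 555 "$uploads"            # write-protected: media uploads fail
stat -c '%a' "$uploads"         # prints 555
chmod 755 "$uploads"            # the owner may write again
stat -c '%a' "$uploads"         # prints 755
rmdir "$uploads"
```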
Today I tried to use sftp to transfer a website to SourceForge.net. Unfortunately, it kept giving me an error:
Received disconnect from <IP address>: 2: Too many authentication failures for <username>
I looked around for why that would happen and could not really find anything decisive... until I found an issue in the Trac system that SourceForge.net uses. That issue mentioned that the ssh-agent could be the culprit.
It was. Somehow the ssh-agent was sending key after key after key... exhausting the number of keys that SourceForge.net will accept and thus made it
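One common way around this is an ~/.ssh/config entry that offers the server only the one intended key instead of everything the agent holds (the host, user, and key file names below are examples, not necessarily what SourceForge.net expects):

```
Host web.sourceforge.net
    # Offer only this key; IdentitiesOnly stops the agent from trying
    # every key it holds.
    User myusername
    IdentityFile ~/.ssh/id_rsa
    IdentitiesOnly yes
```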
One thing I quickly do in my browsers is turn off warnings about non-secure data when browsing secure pages (with HTTPS.)
It's rarely a problem, and with all those features you like to have (Facebook, Twitter, AddThis, ShareThis, Google Plus, and other fun widgets...) it's hard to avoid. Actually, many times the problem lies in one of those scripts, so you cannot just fix your own website. Unless the 3rd party script owner fixes their code, it just won't work at all.
Now, once in a while I work on a customer website and they really want to have a 100% clean slate. Thus,
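When the insecure reference is in your own markup, the usual local fix is simply to load the third-party script over HTTPS (the widget URL below is a placeholder):

```
<!-- Loading the widget over https keeps secure pages free of
     mixed-content warnings. -->
<script src="https://widgets.example.com/share.js"></script>
```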
Today I unearthed an old hard drive with Windows XP on it. After a few hours of twiddling I finally got the wireless to work on it... although even before that, the svchost process was using 99%+ of the processing time.
With just the default Task Manager it's hard to find out what really takes time, so I downloaded procexp.exe (Process Explorer) from the Microsoft Sysinternals website (DON'T DOWNLOAD A VERSION FROM ANYWHERE ELSE!) and that showed me the process tree and thus which service was using all the processor time.
The problem was the automatic windows update. (the
The dump command, under a Unix system, is used to dump an entire file system to another device. By default, the dump output device is a tape device (/dev/tape). Nowadays, however, it is often used with other devices, such as another file system (from one hard drive to another.)
Other systems use that same keyword. It is particularly the case of database systems. For instance, the PostgreSQL database has a pg_dump command.
The opposite command is restore. That command is used to get the data from the output device and put it back on your hard drive.
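A few illustrative command lines showing the pairing (the device, path, and database names are examples only, not something to run as-is):

```
dump -0u -f /dev/tape /home    # level-0 dump of /home to the tape device
restore -rf /dev/tape          # rebuild the dumped tree where you stand
pg_dump mydb > mydb.sql        # the same idea for a PostgreSQL database
```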