Compile and install the nemo file manager on Ubuntu Xenial.

Edit:
Just run this (as advised by JosephMcc, a nemo contributor):

mkdir nemosrc
cd nemosrc
git clone https://github.com/linuxmint/nemo.git
sudo apt build-dep nemo
sudo apt install gnome-pkg-tools libxt-dev python3-polib
cd nemo
dpkg-buildpackage
sudo dpkg -i ../*deb
git clone https://github.com/linuxmint/nemo-extensions
cd nemo-extensions
./buildall
sudo dpkg -i *deb

Create /etc/apt/sources.list.d/mint.list:

deb http://packages.linuxmint.com/ rosa main

To install python-nemo:

sudo apt-key adv --keyserver pgp.mit.edu --recv-keys 3EE67F3D0FF405B2
sudo apt update
sudo apt install python-nemo
sudo apt-get -f install
sudo dpkg -i *deb

Watch the output closely. Correct output looks like this:

(Reading database ... 194677 files and directories currently installed.)
Preparing to unpack nemo-audio-tab_3.2.1_all.deb ...
Unpacking nemo-audio-tab (3.2.1) over (3.2.1) ...
Preparing to unpack nemo-compare_3.2.1_all.deb ...
Unpacking nemo-compare (3.2.1) over (3.2.1) ...
Preparing to unpack nemo-emblems_3.2.3_all.deb ...
Unpacking nemo-emblems (3.2.3) over (3.2.3) ...
Preparing to unpack nemo-fileroller_3.2.1_amd64.deb ...
Unpacking nemo-fileroller (3.2.1) over (3.2.1) ...
Preparing to unpack nemo-image-converter_3.2.0_amd64.deb ...
Unpacking nemo-image-converter (3.2.0) over (3.2.0) ...
Preparing to unpack nemo-media-columns_3.2.1_all.deb ...
Unpacking nemo-media-columns (3.2.1) over (3.2.1) ...
Preparing to unpack nemo-pastebin_3.2.1_all.deb ...
Unpacking nemo-pastebin (3.2.1) over (3.2.1) ...
Preparing to unpack nemo-rabbitvcs_3.2.1_all.deb ...
Unpacking nemo-rabbitvcs (3.2.1) over (3.2.1) ...
Preparing to unpack nemo-share_3.2.0_amd64.deb ...
Unpacking nemo-share (3.2.0) over (3.2.0) ...
Preparing to unpack nemo-terminal_3.2.1_amd64.deb ...
Unpacking nemo-terminal (3.2.1) over (3.2.1) ...
Setting up nemo-audio-tab (3.2.1) ...
Setting up nemo-compare (3.2.1) ...
Setting up nemo-emblems (3.2.3) ...
Setting up nemo-fileroller (3.2.1) ...
Setting up nemo-image-converter (3.2.0) ...
Setting up nemo-media-columns (3.2.1) ...
Setting up nemo-pastebin (3.2.1) ...
Setting up nemo-rabbitvcs (3.2.1) ...
Setting up nemo-share (3.2.0) ...
Setting up nemo-terminal (3.2.1) ...
Processing triggers for bamfdaemon (0.5.3~bzr0+16.04.20160824-0ubuntu1) ...
Rebuilding /usr/share/applications/bamf-2.index...
Processing triggers for gnome-menus (3.13.3-6ubuntu3.1) ...
Processing triggers for desktop-file-utils (0.22-1ubuntu5) ...
Processing triggers for mime-support (3.59ubuntu1) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for libglib2.0-0:amd64 (2.48.2-0ubuntu1) ...

To set nemo as default file manager:

xdg-mime default nemo.desktop inode/directory application/x-gnome-saved-search

You can check the default with:

xdg-mime query default inode/directory

dpkg-buildpackage will report a signing failure:

dpkg-buildpackage: error: failed to sign .dsc and .changes file

but that's not an issue (you can pass -us -uc to dpkg-buildpackage to skip signing altogether). The .deb packages will still be built in the parent directory (../).

Older instructions:

git clone https://github.com/linuxmint/nemo
cd nemo/
./autogen.sh && ./configure && make && make install

If it throws the following error:

./autogen.sh: 25: ./autogen.sh: aclocal: not found

install the missing build tools and dependencies:

sudo apt-get install automake gtk-doc-tools gobject-introspection libgtk-3-dev libgail-3-dev libcinnamon-desktop-dev libxml2-dev libnotify-dev libexif-dev libexempi-dev libselinux1-dev

I also had to manually install older versions of the tracker packages for “tracker support”, because the build did not recognize the newer version that ships with Ubuntu Xenial:

wget http://security.ubuntu.com/ubuntu/pool/universe/t/tracker/libtracker-sparql-0.16-0_0.16.5-0ubuntu0.2_amd64.deb
wget http://security.ubuntu.com/ubuntu/pool/universe/t/tracker/libtracker-miner-0.16-0_0.16.5-0ubuntu0.2_amd64.deb
wget http://security.ubuntu.com/ubuntu/pool/universe/t/tracker/gir1.2-tracker-0.16_0.16.5-0ubuntu0.2_amd64.deb
wget http://security.ubuntu.com/ubuntu/pool/universe/t/tracker/libtracker-sparql-0.16-dev_0.16.5-0ubuntu0.2_amd64.deb
sudo dpkg -i *deb
sudo apt-get install -f 

After doing all this I was getting a compiler error:

Makefile:780: recipe for target 'eel-canvas.lo' failed

So I had to pass a flag to ./configure so that warnings are not treated as errors:

./configure --disable-Werror

Next:

make && sudo make install

Now create a shortcut in the menu for Nemo:

cp data/nemo.desktop ~/.local/share/applications/

Now you can launch it by typing “Files” in the menu. If you want to change the name, edit ~/.local/share/applications/nemo.desktop.

[Desktop Entry]
Name=Nemo
Name[en_GB]=Files
Comment=Access and organize files
Exec=nemo %U
Icon=folder
Terminal=false
Type=Application
StartupNotify=false
Categories=GNOME;GTK;Utility;Core;
MimeType=inode/directory;application/x-gnome-saved-search;
X-GNOME-Bugzilla-Bugzilla=GNOME
X-GNOME-Bugzilla-Product=nemo
X-GNOME-Bugzilla-Component=general
X-GNOME-Bugzilla-Version=3.2.2

I submitted an issue on GitHub.

Installing nemo extensions:

First, install nemo:

sudo apt install nemo
git clone https://github.com/linuxmint/nemo-extensions
cd nemo-extensions
./buildall

You are reading this post on Joel G Mathew’s tech blog. Joel's personal blog is the Eyrie, hosted here.
Merging pdf files

I’d like to talk about a nifty little utility that helps combine multiple pdf files into one big file. Of course, this is possible in Adobe Acrobat Professional, but that’s heavyweight and expensive software. If you’re good with Linux, you can try your luck with pdftk or Ghostscript. However, those command line utilities require a good amount of technical knowhow and have a learning curve. If you use Windows, this small utility from A-PDF might just be your cuppa.

A-PDF Merger can be downloaded from A-PDF site. Once you install it, open it to get the following screen:

[Screenshot: the A-PDF Merger main window]

Just drag in all the required files and rearrange the order as you see fit. Once you click Merge, the files are merged, and you are asked whether to open the output file. A-PDF Merger gives you a host of advanced options, including renumbering and adding a bookmark for each file. A TOC option would have been a great addition; maybe the authors will add it later.

Download A-PDF Merger and give it a try.



Wrap output of ps command

Consider a normal ps command:

#ps auxw
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
joel     21383 46.5  1.5  84972 31956 ?        S    23:13   1:13 /usr/bin/perl /home/joel/domains/epi.jo.in/public_html/scheduledscrape.pl --path=http://www.t
www-data 21384  0.0  0.4 342828  9216 ?        S    23:13   0:00 /usr/sbin/apache2 -k start

To wrap the output and get full lines, use the ww option:

#ps auxww
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.1  29848  3300 ?        Ss   Feb11  13:22 /sbin/init
joel     21383 45.8  1.5  84972 32012 ?        S    23:13   1:15 /usr/bin/perl /home/joel/domains/epi.jo.in/public_html/scheduledscrape.pl --path=http://www.tv.com/shows/doctor-who-2005/episodes/ --size=anysize
www-data 21384  0.0  0.4 342828  9216 ?        S    23:13   0:00 /usr/sbin/apache2 -k start
www-data 21385  0.0  0.4 342800  9132 ?        S    23:13 


Summary of printer management on Suse Linux

SUSE Linux is the OS on which ESI workstations and servers run. Recently we received a fleet of HP Officejet Pro printers. I was printing some urgent documents on one of them when a paper got torn inside the printer. Unfortunately, after clearing the jam, the printer refused to print any more. The situation seemed incomprehensible, because the printer would still successfully print test pages both from the printer’s LCD menu and from the browser-based GUI control panel.

I opened the Printer applet found in the SUSE menu. It showed “Print file sent, waiting for printer to finish”. The applet showed that the printer had jobs pending and was in a paused state. Right-clicking offered no option to resume, and clicking Resume in the main menu didn’t do anything.

After a lot of searching, I opened the same applet as root on the server. There, the resume option was available on right-clicking, and the printer responded to the resume command. Researching further, I came across methods I could have used on the command line to fix this. I found a great resource explaining the various CUPS options, excerpted below.


5.7. Command-Line Tools for the CUPS Printing System

The command-line tools of the CUPS printing system and their manual pages are included in cups-client. Further documentation is provided by cups and installed in /usr/share/doc/packages/cups, in particular the CUPS Software Users Manual, found at /usr/share/doc/packages/cups/sum.html and the CUPS Software Administrators Manual at /usr/share/doc/packages/cups/sam.html. If a CUPS daemon runs locally on your host, you should also be able to access the documentation at http://localhost:631/documentation.html.

As a general rule, it is useful to remember that CUPS command-line tools sometimes require options be supplied in a certain order. Consult the corresponding manual pages if you are unsure about specific options.

5.7.1. Managing Local Queues

5.7.1.1. Printing Files

To print a file, enter the System V style print command lp -d queuename file or a Berkeley style command like lpr -Pqueuename file.

Additional information can be obtained with man lpr and man lp as well as in the section Using the Printing System of the CUPS Software Users Manual (/usr/share/doc/packages/cups/sum.html#USING_SYSTEM).

The -o parameter allows specification of a number of important options, some of which directly influence the type of printout. More information is available in the manual page of lpr and lp as well as in the section Standard Printer Options of the CUPS Software Users Manual (/usr/share/doc/packages/cups/sum.html#STANDARD_OPTIONS).

5.7.1.2. Checking the Status

To check the status of a queue, enter the System V style command lpstat -o queuename -p queuename or the Berkeley style command lpq -Pqueuename. If you do not specify a queue name, the commands display information about all queues.

With lpstat -o, the output shows all active print jobs in the form of a queuename-jobnumber listing. With lpstat -l -o queuename -p queuename, the output is more verbose. lpstat -t or lpstat -l -t displays the maximum amount of available information.

For additional information, consult the manual page of lpq, lpstat, and the section Using the Printing System of the CUPS Software Users Manual (/usr/share/doc/packages/cups/sum.html#USING_SYSTEM).

5.7.1.3. Removing Jobs from the Queue

Enter the System V style command cancel queuename-jobnumber or the Berkeley style command lprm -Pqueuename jobnumber to remove the job with the specified number from the specified queue. For additional information, consult the manual pages of lprm and cancel, and the section Using the Printing System of the CUPS Software Users Manual (/usr/share/doc/packages/cups/sum.html#USING_SYSTEM).

5.7.1.4. Specifying Options for Queues

To see how to specify hardware-independent options that affect the type of printout, read the section Standard Printer Options in the CUPS Software Users Manual (/usr/share/doc/packages/cups/sum.html#STANDARD_OPTIONS). The section Saving Printer Options and Defaults, which is found at /usr/share/doc/packages/cups/sum.html#SAVING_OPTION, explains how to save option settings.

Printer-specific options affecting the type of printout are stored in the PPD file for the queue in question. List them with the command lpoptions -p queuename -l. The output has the following form:

option/text: value value value …

The currently active setting is marked with an asterisk (*) to the left, for example:

PageSize/Page Size: A3 *A4 A5 Legal Letter
Resolution/Resolution: 150 *300 600

According to the above output, the PageSize is set to A4 and the Resolution to 300 dpi.
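The active value can be pulled out of such a line with a small shell sketch (active_value is a made-up helper name for illustration, not a CUPS tool; the sample line is the one above):

```shell
# Extract the active (*-marked) value from an "lpoptions -l" output line.
# active_value is an illustrative helper, not part of CUPS.
active_value() {
    printf '%s\n' "$1" | tr ' ' '\n' | grep '^\*' | tr -d '*'
}

active_value "PageSize/Page Size: A3 *A4 A5 Legal Letter"   # → A4
```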

The command lpoptions -p queuename -o option=value changes the value for the given option. With the above sample settings in mind, use the following command to set the paper size for the specified queue to Letter:

lpoptions -p queuename -o PageSize=Letter

If the above lpoptions command is entered by a normal user, the new settings are stored for that user only, in the file ~/.lpoptions. In contrast, if the lpoptions command is entered by root, the settings specified are stored in /etc/cups/lpoptions and become the default for all local users of the queue. The PPD file is not touched by this, however.

If (and only if) you change the contents of a PPD file for a given queue, the new settings apply to all users in the local network who print through this queue. The system administrator can change the defaults of a PPD file with a command like:

lpadmin -p queuename -o PageSize=Letter

For more information, refer to the Support Database article Print Settings with CUPS.

5.7.2. Managing Remote Queues

For each of the commands explained below, replace printserver with the name or IP address of your print server. queuename must be a queue on the print server. This section merely covers the basic commands. Additional options and information sources are referred to in Section 5.7.1. “Managing Local Queues”.

5.7.2.1. Printing Files

You can use the System V style command lp -d queuename -h printserver file to generate a print job for the specified queue on the specified print server. This is only possible if the print server was configured to accept remote print jobs on its queues. This is not enabled by default in CUPS, but can easily be configured in the CUPS server settings in the YaST printer configuration module.

5.7.2.2. Checking the Status

Check the status of a queue on the print server with the System V style command lpstat -h printserver -o queuename -p queuename.

5.7.2.3. Removing Jobs from the Queue

The System V style command cancel -h printserver queuename-jobnumber removes the print job with the specified job number from the specified queue on the print server.

5.7.3. Using Command-Line Tools for CUPS Troubleshooting

Print jobs will be kept in the printer queue if you shut down the system while a job is being processed. This means a broken print job will still be there even after rebooting and you need to remove it from the queue manually with the commands mentioned above.

Other problems occur if there is some fault in the physical data link between the computer and the printer. The printer may then be unable to make sense of the data it receives and start spitting out lots of pages with garbage on them.

To make the printer stop printing, first remove all paper from it (in the case of inkjet printers) or open the paper trays (laser printers).

At this point, the print job will often still be in the queue, because jobs are only removed from the queue when all data has been sent to the device. Check which queue is currently printing by entering lpstat -o (or lpstat -h printserver -o) then remove the problematic print job with cancel queuename-jobnumber (or with cancel -h printserver queuename-jobnumber).

Some data might still find their way to the printer in spite of the job having been deleted. To stop this, enter the command fuser -k /dev/lp0 (for a printer at the parallel port) or fuser -k /dev/usb/lp0 (for a USB printer). This kills any processes still using the printer device.

Do a complete reset of the printer by disconnecting it from power for some time. Then put in the paper and switch the printer back on.

You should also be aware that if data cannot get through to the printer or if there is some other major trouble with the data transfer (such as a longer interruption of the data link), the CUPS back-end responsible for the data transfer may abort with an error code. The exact circumstances of this depend on the back-end used (whether it is the back-end for the parallel or the USB port, for the LPD or the IPP server, or the one for direct data transfer via TCP sockets). If a back-end aborts, the CUPS server ceases to print via the affected queue and marks it as disabled or stopped. The system administrator, after having removed the cause of the trouble, must re-enable these queues with the command /usr/bin/enable queuename (or with /usr/bin/enable -h printserver queuename).



Adding datestamp to output apk files in Gradle

In build.gradle, start off with:

import java.text.SimpleDateFormat
def buildTime() {
    def df = new SimpleDateFormat("yyyy-MM-dd'-'HH:mm") //you can change it
    df.setTimeZone(TimeZone.getTimeZone("IST"))
    return df.format(new Date())
}
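For comparison, the same timestamp can be sketched in shell with date (Java resolves the short zone ID "IST" to Asia/Kolkata, which is assumed here):

```shell
# Produce the same yyyy-MM-dd-HH:mm stamp as buildTime(), in Indian time.
# "IST" is ambiguous as a zoneinfo name, so Asia/Kolkata is spelled out.
TZ=Asia/Kolkata date +"%Y-%m-%d-%H:%M"
```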

Now add the following to modify the APK name:

applicationVariants.all { variant ->

        def apk = variant.outputFile;
        def newName;

        def time = buildTime()
        def versionNameSuffix = "-beta-build-${time}"

        newName = apk.name.replace(".apk", "-v" + defaultConfig.versionName + "-" + variant.buildType.name.toUpperCase() + versionNameSuffix + ".apk");

        newName = newName
                .replace("-" + variant.buildType.name, "")
                .replace(project.name, "droidzone");

        variant.outputFile = new File(apk.parentFile, newName);
        if (variant.zipAlign) {
            variant.outputFile = new File(apk.parentFile, newName.replace("-unaligned", ""));
        }

        logger.info('INFO: Set outputFile to ' + variant.outputFile + " for [" + variant.name + "]");
    }


How to set up key authentication for a server?

If you have SSH installed, you should be able to run:

ssh-keygen

Then go through the prompts; you’ll have two files, id_rsa and id_rsa.pub (the first is your private key, the second is your public key – the one you copy to remote machines).

Then connect to the remote machine you want to log in to, and append the contents of your id_rsa.pub file to its ~/.ssh/authorized_keys file.

chmod 600 all the id_rsa* files (both locally and remote), so no other users can read them:

chmod 600 ~/.ssh/id_rsa*

Similarly, ensure the remote ~/.ssh/authorized_keys file is chmod 600 also:

chmod 600 ~/.ssh/authorized_keys

Then, when you do ssh remote.machine, it should ask you for the key’s passphrase, not the remote machine’s password.

To make it nicer to use, you can use ssh-agent to hold the decrypted keys in memory – this means you don’t have to type your keypair’s passphrase every single time. To launch the agent, you run (including the back-tick quotes, which eval the output of the ssh-agent command):

eval `ssh-agent`

On some distros, ssh-agent is started automatically. If you run echo $SSH_AUTH_SOCK and it shows a path (probably in /tmp/) it’s already setup, so you can skip the previous command.

Then to add your key, you do

ssh-add ~/.ssh/id_rsa

and enter your passphrase. It’s stored until you remove it (using the ssh-add -D command, which removes all keys from the agent)



Convert and split video files on the Linux command line

avconv is a nifty utility for converting media files on the Linux command line.
According to the libav documentation page,

avconv is a very fast video and audio converter that can also grab from a live audio/video source. It can also convert between arbitrary sample rates and resize video on the fly with a high quality polyphase filter.

Installing avconv on Debian/Ubuntu

#apt-get install libav-tools

Compiling avconv from source

See my post

General syntax for avconv

avconv [global options] [[infile options][‘-i’ infile]]… {[outfile options] outfile}…

Main options:
‘-i filename (input)’
input file name

‘-y (global)’
Overwrite output files without asking.

‘-c[:stream_specifier] codec (input/output,per-stream)’
‘-codec[:stream_specifier] codec (input/output,per-stream)’

Select an encoder (when used before an output file) or a decoder (when used before an input file) for one or more streams. codec is the name of a decoder/encoder or a special value copy (output only) to indicate that the stream is not to be reencoded.

For example

avconv -i INPUT -map 0 -c:v libx264 -c:a copy OUTPUT

encodes all video streams with libx264 and copies all audio streams.

‘-ss position (input/output)’
When used as an input option (before -i), seeks in this input file to position. Note that in most formats it is not possible to seek exactly, so avconv will seek to the closest seek point before position. When transcoding and ‘-accurate_seek’ is enabled (the default), this extra segment between the seek point and position will be decoded and discarded. When doing stream copy or when ‘-noaccurate_seek’ is used, it will be preserved.

When used as an output option (before an output filename), decodes but discards input until the timestamps reach position.

position may be either in seconds or in hh:mm:ss[.xxx] form.

‘-t duration (output)’
Stop writing the output after its duration reaches duration. duration may be a number in seconds, or in hh:mm:ss[.xxx] form.

For more detailed information on these and other options, refer to the libav documentation.

Splitting a video file with avconv

#avconv -i 'Supernatural S01E20 HDTV.mp4' -vcodec copy -acodec copy -ss 00:00:30 -t 00:00:15 'seg2.mp4'

-ss is the start position
-t is the duration

Getting media file information with avconv

Run avconv with just the input file name and no output file:

#avconv -i supernatural_s04e07.mp4
avconv version 0.8.12-6:0.8.12-1, Copyright (c) 2000-2014 the Libav developers
  built on Jun  1 2014 17:03:01 with gcc 4.7.2
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'supernatural_s04e07.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf53.21.1
  Duration: 00:40:52.53, start: 0.000000, bitrate: 525 kb/s
    Stream #0.0(und): Video: h264 (Main), yuv420p, 624x352 [PAR 1:1 DAR 39:22], 360 kb/s, 23.98 fps, 23.98 tbr, 2997003.00 tbn, 47.95 tbc
    Stream #0.1(und): Audio: mp3, 48000 Hz, 2 channels, s16, 160 kb/s
At least one output file must be specified

Get the duration of a video with avconv

avconv -i "$1" 2>&1 | grep "Duration"| cut -d ' ' -f 4 | sed s/,//

Eg:

avconv -i supernatural_s04e07.mp4 2>&1 | grep "Duration"| cut -d ' ' -f 4 | sed s/,//
00:40:52.53
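For scripting, the HH:MM:SS.xx value can be turned into seconds; a small sketch (dur_to_sec is a made-up helper name):

```shell
# Convert an avconv duration of the form HH:MM:SS.xx into seconds.
# dur_to_sec is an illustrative helper, not an avconv feature.
dur_to_sec() {
    printf '%s\n' "$1" | awk -F: '{ printf "%.2f\n", $1 * 3600 + $2 * 60 + $3 }'
}

dur_to_sec 00:40:52.53   # → 2452.53
```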

For more details on avconv, refer to the libav documentation page.
For libx264 details, refer to: https://libav.org/avconv.html#libx264

Tips (more about CRF):
‘-crf float’
Select the quality for constant quality mode.

The Constant Rate Factor (CRF) is the default quality setting for the x264 encoder. You can set the values between 0 and 51, where lower values would result in better quality (at the expense of higher file sizes). Sane values are between 18 and 28. The default for x264 is 23, so you can use this as a starting point.

If you’re unsure about what CRF to use, begin with 23 and change it according to your subjective impression of the output. Is the quality good enough? No? Then set a lower CRF. Is the file size too high? Choose a higher CRF. A change of ±6 should result in about half/double the file size, although your results might vary.
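That rule of thumb amounts to a size ratio of roughly 2^(Δ/6); a quick sketch to estimate the change (crf_ratio is a made-up helper, and the formula is only an approximation, not a property guaranteed by x264):

```shell
# Estimate the file-size ratio when changing CRF, using the rough
# "±6 halves/doubles the size" rule: ratio ≈ 2^((old - new) / 6).
crf_ratio() {
    awk -v old="$1" -v new="$2" 'BEGIN { printf "%.2f\n", 2 ^ ((old - new) / 6) }'
}

crf_ratio 23 17   # lowering CRF by 6: roughly double the size
crf_ratio 23 29   # raising CRF by 6: roughly half the size
```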

Converting a 1080p video into 640×480 (VGA):

avconv -i source.mp4 -s vga -c:v libx264 -preset ultrafast -c:a copy output.mp4


Synchronizing Dropbox files with Hubic

Hubic offers 25 GB of storage for free. It makes sense therefore to transfer your Dropbox-ed files to Hubic.

First, download the Dropbox files to your Linux server with the Dropbox CLI.

cd ~/
wget --content-disposition https://www.dropbox.com/download?plat=lnx.x86_64
tar xf dropbox-lnx.x86_64-2.6.20.tar.gz
~/.dropbox-dist/dropboxd

For 32 bit, you should use https://www.dropbox.com/download?plat=lnx.x86
Alternately, choose a package appropriate for your OS from here.

Now you will be prompted to link your server to Dropbox by visiting a weblink:

#~/.dropbox-dist/dropboxd
This computer isn't linked to any Dropbox account...
Please visit https://www.dropbox.com/cli_link?host_id=d1ecdecdf47d8e38732ffde33256fbb9e43 to link this device.

Enter your Dropbox password to link the server.

This computer is now linked to Dropbox. Welcome Joel

Once it is set up for the first time, start the dropbox daemon again and send it to the background:

#~/.dropbox-dist/dropboxd &

Now, install the Hubic package for Debian from http://mir7.ovh.net/ovh-applications/hubic/hubiC-Linux/1.2.4/
Instructions: https://forums.hubic.com/showthread.php?272-hubiC-for-Linux-beta-is-out-!

Summary:

wget http://mir7.ovh.net/ovh-applications/hubic/hubiC-Linux/1.2.4/hubiC-Linux-1.2.4.35-linux.deb
dpkg -i hubiC-Linux-1.2.4.35-linux.deb
apt-get install -f
dbus-daemon --session --fork --print-address

Now use the output of the last command:

export DBUS_SESSION_BUS_ADDRESS='unix:abstract=/tmp/dbus-9VodnJEWDS,guid=7d86192aab25825f0feee264532e4bc4'
hubic login [email protected] Dropbox/
hubic start

To automate login, do:

#hubic login --password_path=/root/.hubicpassword [email protected] Dropbox/

Here, create a file /root/.hubicpassword which contains your password in plaintext.

To get the status of synchronization, use the following:

hubic status

Keep the Dropbox service running:
You should install the Dropbox CLI (command line interface) which can be used to get the status of dropbox daemon, start and stop the daemon etc.

Install the CLI:

#mkdir -p ~/bin
#wget -O ~/bin/dropbox.py "https://www.dropbox.com/download?dl=packages/dropbox.py"
#chmod +x ~/bin/dropbox.py
#~/bin/dropbox.py help

Dropbox command-line interface

commands:

Note: use dropbox help to view usage for a specific command.

status      get current status of the dropboxd
help        provide help
puburl      get public url of a file in your dropbox
stop        stop dropboxd
running     return whether dropbox is running
start       start dropboxd
filestatus  get current sync status of one or more files
ls          list directory contents with current sync status
autostart   automatically start dropbox at login
exclude     ignores/excludes a directory from syncing
lansync     enables or disables LAN sync

Start the daemon with the CLI:

#~/bin/dropbox.py start
Starting Dropbox...Dropbox isn't running!
Done!

Solutions to common problems:
“Cannot contact daemon, are you sure it is running?”

Try the following:

#dbus-launch --sh-syntax
DBUS_SESSION_BUS_ADDRESS='unix:abstract=/tmp/dbus-eG8rW6f2iR,guid=c51a477c991da9681f80b58a5376246d';
export DBUS_SESSION_BUS_ADDRESS;
DBUS_SESSION_BUS_PID=1552;

Now do:

#export DBUS_SESSION_BUS_ADDRESS='unix:abstract=/tmp/dbus-eG8rW6f2iR,guid=c51a477c991da9681f80b58a5376246d'
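Instead of copying the address by hand, the assignments that dbus-launch --sh-syntax prints can be eval'd directly. A sketch, using a captured sample of that output ("sample" stands in for the real output; in practice you would run eval "$(dbus-launch --sh-syntax)"):

```shell
# dbus-launch --sh-syntax prints shell variable assignments; eval-ing them
# sets DBUS_SESSION_BUS_ADDRESS without any manual copy-paste.
# "sample" below stands in for the real captured output.
sample="DBUS_SESSION_BUS_ADDRESS='unix:abstract=/tmp/dbus-eG8rW6f2iR';
export DBUS_SESSION_BUS_ADDRESS;
DBUS_SESSION_BUS_PID=1552;"

eval "$sample"
echo "$DBUS_SESSION_BUS_ADDRESS"
```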

Now try logging into hubic again.



Perl simple file writing script

Try the following:

open FIL,">temp";
print FIL "Hola";
print FIL "Nada";
close FIL; 

Note:
No commas are allowed after the file handle, before the string to be written. Line breaks are not inserted unless specified.


Tracking and reporting a mail spammer

Here’s what I received in my mailbox today:

Delivered-To: [[email protected] removed]
Received: by 10.68.204.234 with SMTP id lb10csp6940pbc;
        Sat, 10 Aug 2013 01:39:05 -0700 (PDT)
X-Received: by 10.60.131.69 with SMTP id ok5mr3471132oeb.70.1376123944596;
        Sat, 10 Aug 2013 01:39:04 -0700 (PDT)
Return-Path: <[email protected]>
Received: from mail-oa0-f46.google.com (mail-oa0-f46.google.com [209.85.219.46])
        by mx.google.com with ESMTPS id sp4si11350425oeb.124.2013.08.10.01.39.04
        for <[[email protected] removed]>
        (version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
        Sat, 10 Aug 2013 01:39:04 -0700 (PDT)
Received-SPF: pass (google.com: domain of [email protected] designates 209.85.219.46 as permitted sender) client-ip=209.85.219.46;
Authentication-Results: mx.google.com;
       spf=pass (google.com: domain of [email protected] designates 209.85.219.46 as permitted sender) [email protected]
Received: by mail-oa0-f46.google.com with SMTP id l10so7944474oag.33
        for <[[email protected] removed]>; Sat, 10 Aug 2013 01:39:04 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20120113;
        h=x-original-authentication-results:delivered-to:to:subject:from
         :reply-to:message-id:date;
        bh=zSMRbrl//PMEzHQ6OysHgk48CTaa8Hx+QU92JI9AcuE=;
        b=V0jDd5g44xp/IflBQFEPP/A7WMhSgycsjaYSstyMPUC2DZtHGa3m2kexZR5cYbWZW0
         +kgwdapOCTRshF9sHdP9SJ5IfIhwyyd3TExyjzMun0nVaY0Eb8qBkq+ZyjRCXw6Sq4jL
         oguysxDZlQkd9AKLYz5BEFOOJJW4AMMYRu0UikldCbP5xXXdmQE2meXZJoadY9oE9WXA
         qhPhacLCZXEftv6FUWya9oygDEAUDwOnjgo09GB/R4kc+gMx7Nv4K9j+YDFJTK3n7PYQ
         zjppUh9eP6ONxuNrCqzQDAhQP8iSzygLBhvc49vjnZpMFOxsOR4yD4KlpmzrBGvnOaTO
         UEIg==
X-Original-Authentication-Results: mx.google.com;       spf=neutral (google.com: 198.23.248.156 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected]
X-Received: by 10.182.119.229 with SMTP id kx5mr6412591obb.23.1376123944234;
        Sat, 10 Aug 2013 01:39:04 -0700 (PDT)
X-Forwarded-To: [[email protected] removed]
X-Forwarded-For: [[email protected] removed] [[email protected] removed]
Delivered-To: [[email protected] removed]
Received: by 10.182.128.229 with SMTP id nr5csp8850obb;
        Sat, 10 Aug 2013 01:39:03 -0700 (PDT)
X-Received: by 10.68.189.194 with SMTP id gk2mr1027310pbc.194.1376123943050;
        Sat, 10 Aug 2013 01:39:03 -0700 (PDT)
Return-Path: <[email protected]>
Received: from us4.networkpanda.com ([198.23.248.156])
        by mx.google.com with ESMTPS id ie10si14857047pbc.251.2013.08.10.01.39.02
        for <[[email protected] removed]>
        (version=TLSv1 cipher=RC4-SHA bits=128/128);
        Sat, 10 Aug 2013 01:39:03 -0700 (PDT)
Received-SPF: neutral (google.com: 198.23.248.156 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=198.23.248.156;
Received: from topseo80 by us4.networkpanda.com with local (Exim 4.80.1)
	(envelope-from <[email protected]>)
	id 1V84hR-000004-MM
	for [[email protected] removed]; Sat, 10 Aug 2013 08:39:01 +0000
To: [[email protected] removed]
Subject: VPS Hosting Services Providers
From: [email protected]
Reply-To: [email protected]
X-Mailer: NotOneBit.com Simple Mailer
Message-Id: <[email protected]>
Date: Sat, 10 Aug 2013 08:39:01 +0000
X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
X-AntiAbuse: Primary Hostname - us4.networkpanda.com
X-AntiAbuse: Original Domain - gmail.com
X-AntiAbuse: Originator/Caller UID/GID - [764 765] / [47 12]
X-AntiAbuse: Sender Address Domain - us4.networkpanda.com
X-Get-Message-Sender-Via: us4.networkpanda.com: authenticated_id: topseo80/only user confirmed/virtual account not confirmed

Dear Sir,

I am Hemant Bansal, Business development executive. We are providing quality VPS hosting for websites.

If your website is grown up or not running smoothly, we can provide you quality Virtual private server (VPS) hosting for Rs 800/- only.

In VPS you will get all the features of a dedicated server for fraction of a dedicated server cost. You will get full root access, can host unlimited domains, unlimited email ids. You can install any software which need root access and can set any configuration setting as per your need.

If you are suffering in shared hosting because other sites in shared server are using too much resources or facing problem in email due to your neighbour site is spamming. We recommend you to switch to VPS hosting. A VPS will give you complete independence and lots of room to grow your site.

I will really appreciate if you please let me know your VPS requirement.

We are also providing Reseller and shared hosting.


Warm Regards

Hemant Bansal


P.S. To stop receiving further mail please reply with "Remove" in the subject line.

I went on the trail of this spammer.

Looking at the Original Message text in Gmail gives me:

X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
X-AntiAbuse: Primary Hostname - us4.networkpanda.com
X-AntiAbuse: Original Domain - gmail.com
X-AntiAbuse: Originator/Caller UID/GID - [764 765] / [47 12]
X-AntiAbuse: Sender Address Domain - us4.networkpanda.com
X-Get-Message-Sender-Via: us4.networkpanda.com: authenticated_id: topseo80/only user confirmed/virtual account not confirmed
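Pulling the originating server out of raw headers like these can be scripted. A minimal sketch in Python (the `find_origin` helper and its regex are my own illustration, not a standard API); it scans `Received:` lines for a host name and a bracketed client IP:

```python
import re

# Match lines like "Received: from host.example.com ([198.23.248.156])"
# and capture the claimed host plus the dotted-quad client IP.
RECEIVED_RE = re.compile(
    r"Received: from (\S+) .*?[\[(](\d{1,3}(?:\.\d{1,3}){3})[\])]"
)

def find_origin(raw_headers: str):
    """Return (host, ip) pairs from Received headers, outermost first."""
    return RECEIVED_RE.findall(raw_headers)

# The Received line from the spam mail above:
headers = """\
Received: from us4.networkpanda.com ([198.23.248.156])
        by mx.google.com with ESMTPS id ie10si14857047pbc.251.2013.08.10.01.39.02
"""

print(find_origin(headers))  # [('us4.networkpanda.com', '198.23.248.156')]
```

Note that only the IP in the last hop added by your own provider is trustworthy; earlier `Received:` lines can be forged by the spammer.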

Accordingly, I traced the mail server, us4.networkpanda.com:

#ping us4.networkpanda.com
PING us4.networkpanda.com (192.227.129.118) 56(84) bytes of data.
64 bytes from host.colocrossing.com (192.227.129.118): icmp_req=1 ttl=48 time=82.4 ms

Now I need to locate the company to which this IP block has been delegated.

I visit http://whois.arin.net/rest/net/NET-192-227-129-112-1/pft

It gives me the following details:

WHOIS-RWS

Network
NetRange	192.227.129.112 - 192.227.129.127
CIDR	192.227.129.112/28
Name	CC-192-227-129-112-28
Handle	NET-192-227-129-112-1
Parent	CC-12 (NET-192-227-128-0-1)
Net Type	Reallocated
Origin AS	AS36352
Organization	Green Value Hosting, Inc. (GVH-7)
Registration Date	2013-07-28
Last Updated	2013-07-28
Comments	
RESTful Link	http://whois.arin.net/rest/net/NET-192-227-129-112-1
See Also	Related organization's POC records.
See Also	Related delegations.


Organization
Name	Green Value Hosting, Inc.
Handle	GVH-7
Street	1600 Ironwood Dr
City	Normal
State/Province	IL
Postal Code	61761
Country	US
Registration Date	2013-07-15
Last Updated	2013-07-15
Comments	
RESTful Link	http://whois.arin.net/rest/org/GVH-7
Function	Point of Contact
NOC	JONAT8-ARIN (JONAT8-ARIN)
Admin	JONAT8-ARIN (JONAT8-ARIN)
Tech	JONAT8-ARIN (JONAT8-ARIN)
Abuse	JONAT8-ARIN (JONAT8-ARIN)


Point of Contact
Name	Jonathan , Nguyen
Handle	JONAT8-ARIN
Company	Green Value Hositng, Inc.
Street	6 Copps Hill Road
City	Windham
State/Province	NH
Postal Code	03087
Country	US
Registration Date	2013-07-13
Last Updated	2013-07-20
Comments	
Phone	+1-603-339-2886 (Office)
Email	[email protected]
[email protected]
RESTful Link	http://whois.arin.net/rest/poc/JONAT8-ARIN

This gave me the details I needed: the block was delegated to Green Value Hosting, a provider well known in WebHostingTalk circles. I also got the email address of their abuse department.
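As a sanity check, the IP that us4.networkpanda.com resolved to in the ping above can be verified against the delegated NetRange from the ARIN record using Python's stdlib `ipaddress` module:

```python
import ipaddress

# The IP from the ping output, and the CIDR from the ARIN Network record.
host_ip = ipaddress.ip_address("192.227.129.118")
delegated_block = ipaddress.ip_network("192.227.129.112/28")

# A /28 covers 192.227.129.112 through .127, so .118 falls inside it.
print(host_ip in delegated_block)  # True
```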

Another detail from the mail itself is the mail server name: us4.networkpanda.com

networkpanda.com appears to be a hosting company, probably renting a dedicated server from Green Value Hosting. I reported the spam mail to both companies.

If they don’t take action, the next step is to report it to ColoCrossing, the colocation/dedicated server provider, and also to ARIN, the registry that delegated the IP block.

Thanks to @INIZ on Lowendtalk.com for help regarding RIPE databases.

Note: If you do a search on ARIN’s site, by default it takes you to a page like http://whois.arin.net/rest/net/NET-192-227-129-112-1, which does not show full details. You need to append ‘/pft’ to the resulting URL to get the full record, e.g. http://whois.arin.net/rest/net/NET-192-227-129-112-1/pft

As per ARIN’s post:

The next enhancement is a change to the default output on a query initiated by the search box on the web page for IP addresses, organizations, and ASNs. The query for an IP address or network will return the network as well as the full output of related Organization and Point of Contact (POC) data for the network. Likewise, the query result for an AS number will output the associated organization and related POCs along with the AS number. The query result for an organization, will list all related networks and ASNs, and give full output of associated POCs. This will allow you to view all information on a single web page. This “pft” option is an enhancement to the RESTful web interface, and it is not available on port 43. To use it, append “/pft” to the URL, for example:

http://whois.arin.net/rest/org/ARIN/pft
http://whois.arin.net/rest/net/NET-192-136-136-0-1/pft
Note again that web search forms will default to using the “pft” option.
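The “/pft” suffix rule described above is easy to wrap in a helper; this tiny function (my own naming, not part of any ARIN library) normalizes a WHOIS-RWS URL so the full record is returned:

```python
def pft_url(arin_url: str) -> str:
    """Append ARIN's '/pft' suffix to a WHOIS-RWS URL for full output."""
    return arin_url.rstrip("/") + "/pft"

print(pft_url("http://whois.arin.net/rest/net/NET-192-227-129-112-1"))
# http://whois.arin.net/rest/net/NET-192-227-129-112-1/pft
```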

Response from the Provider

I had contacted the abuse department of networkpanda.com, whose email server us4.networkpanda.com was used for sending spam email. They responded by blocking the user:

Hello Joel, 

Just as a follow up, the user who was authenticating to this Gmail account to send the messages, was now permanently suspended from our servers. But you will also need to report the account [email protected] to Google, as he will be also using other providers to send spam. 

Thank you for reporting this issue. 

You are reading this post on Joel G Mathew’s tech blog. Joel's personal blog is the Eyrie, hosted here.