Background a long-running Linux task

Here’s how to background a shell command that’s taking forever to complete when you don’t want to open up a new ssh session to the host:
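The original snippet didn’t survive here, but a common sequence (assuming bash, with the long-running command currently in the foreground) looks like this:

```shell
# Press Ctrl+Z first to suspend the foreground command, then:
bg          # resume the suspended job in the background
disown -h   # detach the job from this shell so it survives logout
```

If you know ahead of time that a command will run long, starting it with “nohup some_command &” avoids the suspend/resume dance entirely.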


Pretty useful, especially if you’re manually running something like a long backup process that, if you just closed the ssh session, would stop in some unknown, incomplete state.

VMware: “the operation is not allowed in the current state”

I had tried to put a host into maintenance mode so I could reboot it. It seemed to get hung up, but I didn’t see any outstanding tasks, so I just rebooted it. After it came back up, things seemed alright, and I started up the various VMs on it. Later, I started a migration task to change the storage of one of the VMs on that host. At the very end it said “the operation is not allowed in the current state” and failed. It happened again when I tried to deploy an ovf template to that host.

This post from VMware gave a number of things to try. For me, it was disconnecting the host from vCenter and then reconnecting it (all right-click operations, so it was easy and quick). When it reconnected, the host was shown to be in maintenance mode, which was not the case before disconnecting and reconnecting it. Things seem happy now.

Running an older Perforce version with p4dctl instead of the latest (16.1 or whatever)

If you want to run Perforce with p4dctl since it’s so handy, but you only have a license for some older version of Perforce, here’s how you can have p4dctl point at that version:

Put the p4d binary into /opt/perforce/sbin/ and name it something descriptive for its version, like “p4d.2012.1”

Change the symbolic link in /etc/alternatives/helix-p4d to point at that instead of the p4d version the installer set up:

rm /etc/alternatives/helix-p4d
ln -s /opt/perforce/sbin/p4d.2012.1 /etc/alternatives/helix-p4d

Done. You’ve just swapped out the version of p4d that p4dctl uses. Note that this is global for the entire machine, so it’s something to consider. I think p4d will auto-upgrade an existing p4 instance, but it can’t go backwards to older versions, just to mention that. You’ll get weird errors if you try.
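To sanity-check the swap, you can confirm where the link points and ask the binary itself for its version (p4d prints its version string with the -V flag):

```shell
# Should print /opt/perforce/sbin/p4d.2012.1
readlink /etc/alternatives/helix-p4d

# Prints the version line for whatever binary the link resolves to
/etc/alternatives/helix-p4d -V
```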

Easiest Way to Run Perforce

Install Perforce according to this:

Then run the configure script mentioned in the output after the installer finishes.

That script essentially just sets up a .conf file in /etc/perforce/p4dctl.conf.d/

You can edit that thing directly or add another one if you want to have another perforce instance running on the same machine.
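For reference, a generated file looks roughly like this (the instance name and paths here are examples, not necessarily what the installer produces on your machine):

```
p4d master
{
    Owner    = perforce
    Execute  = /opt/perforce/sbin/p4d
    Umask    = 077

    Environment
    {
        P4ROOT  = /p4root
        P4PORT  = 1666
        PATH    = /bin:/usr/bin:/usr/local/bin:/opt/perforce/bin:/opt/perforce/sbin
    }
}
```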

Note: you cannot use p4dctl with replication, as best I can tell – it clobbers any “p4 configure” settings that may have been made on the master or the replica.

Start and stop all Perforce instances mentioned in /etc/perforce/p4dctl.conf.d/:
p4dctl start -a

p4dctl stop -a

Description of Perforce Replication

Here’s a super-simple description of how Perforce replication works:

You replicate from a master p4 instance to one or more replica instances. Each replica instance is just a p4d process running a few “p4 pull” commands pointed at the master via some configs. Those configs are set on the master by saying “replica instances named ‘Replica1’ should run with these configs”. You can have several replica instances on several different hosts all using the same perforce user. Each one just needs to have replayed a checkpoint file and synced over all the versioned files from the master, and each needs to be started with a command-line arg saying it is named “Replica1” (or whatever name you set up with configs on the master).

Some notes:

Perforce users used for replication need to have ‘super’ permissions (set in p4 protect on the master). They also need to have their ticket timeouts set to ‘unlimited’ or to something super-long, so you don’t have to log in with the user to renew the ticket.

You can apparently configure a replica as a “forwarding replica” – a read-only replica that auto-forwards any write operations to the master and waits to respond to the client until after those changes have replicated through to it.

I *believe* you could replay a checkpoint and copy an incomplete set of the versioned files to the replica and things would still work; you would then run ‘p4 verify -t’ on the replica to kick off a giant comparison of every file, and the replica would pull down replacement files from the master wherever it detected an inconsistency or a missing file. So, if I understand it correctly, you could technically set up a replica with only a checkpoint from the master, run ‘p4 verify -t’, and the replica would eventually sync up.

Setting up Perforce Replication

This was oddly tricky. There’s not a ton of documentation on the internet about it, at least not that my Google searching turned up. Loads of official Perforce docs, yes, but I kept having problems with those step-by-steps.

Not to fear though. Below is a step-by-step that worked for me. I only tested it on the latest version, but it seems older versions (2012 or so onward) are the same.

Set up Perforce replication

Master here is at
Replica host is at
Replication service user is named repl_user_for_Replica3
Assumes your perforce installs are in the same directory (/p4root in this case)
Assumes you installed perforce to run under the “perforce” OS user (default from apt-get)
Assumes both your perforce instances use the same case sensitivity and unicode settings

## Set up the perforce repo and update apt

## Do a basic install on the master
apt-get install helix-p4d -y

## On the master, do a basic configuration, setting the p4 root to /p4root
## Don’t use ssl:1666; just type in 1666 for that step

## On the master, set up a user for the replica to use
p4 -p -u super user -f repl_user_for_Replica3
## Add a line at the bottom with “Type: service”

## On the master, set a password for the service account
p4 -p -u super passwd repl_user_for_Replica3

## On the master, set up a group for the service accounts and make login tickets effectively never expire
p4 -p -u super group service_group
## Users: repl_user_for_Replica3
## Timeout: unlimited
## You can add more users by just having one per line
## You should see “Group service_group created.”

## Add super permissions to the service_group group – sounds alarming, but it seems to be required
## super group service_group * //...
p4 -p -u super protect

## Alright, the idea here is to set all the important replication things on the master rather than on the replica
## This way, for the replica we only have to worry about restoring the checkpoint, copying the versioned files, and naming the replica
## This approach will result in the replica automatically pulling everything it needs after merely starting it up
## I believe you could then start up any other p4 instance and merely point it at the master
## Based on its name it will then have all the replication configs already set up for it to use

## On the master, set the configs for the replica, a p4 instance we will call “Replica3”
## also set a lot of other variables the replica will pick up and use
p4 -p -u super configure set Replica3#P4TARGET=
p4 -p -u super configure set Replica3#P4LOG=replica3Log.txt
p4 -p -u super configure set Replica3#server=3
p4 -p -u super configure set Replica3#monitor=1
p4 -p -u super configure set "Replica3#startup.1=pull -i 1"
p4 -p -u super configure set "Replica3#startup.2=pull -u -i 1"
p4 -p -u super configure set "Replica3#startup.3=pull -u -i 1"
p4 -p -u super configure set Replica3#db.replication=readonly
p4 -p -u super configure set Replica3#lbr.replication=readonly
p4 -p -u super configure set Replica3#serviceUser=repl_user_for_Replica3

## On the master, checkpoint the master
p4 -p -u super admin checkpoint

## Now, take checkpoint.3 (or whatever the biggest number is) and copy it to the replica’s p4root folder
## Yes, it’s likely a 2-step process, where you scp to /tmp, then ssh into the replica host and move from /tmp to /p4root
## Yes, you will have to mkdir /p4root on the replica
scp /p4root/checkpoint.3 ts@

## Copy all the versioned files from the master to the replica’s p4root folder (same location as on the master, relative to p4root)
## Yes, it’s likely a 2-step process, where you scp to /tmp, then ssh into the replica host and move from /tmp to /p4root
## Yes, you will have to mkdir /p4root on the replica
scp -r /p4root/streamsDepot ts@
## I’m using “streamsDepot” because that was the default folder it created. You may have other folders there; each represents a depot
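Spelled out, the two-step copy for both the checkpoint and the versioned files might look like this (replica.example.com is a placeholder for your replica host’s address; ts is the ssh user from the commands above):

```shell
# On the master: push the checkpoint and the depot files to the replica's /tmp
scp /p4root/checkpoint.3 ts@replica.example.com:/tmp/
scp -r /p4root/streamsDepot ts@replica.example.com:/tmp/

# On the replica: create the p4 root and move everything into place
sudo mkdir -p /p4root
sudo mv /tmp/checkpoint.3 /tmp/streamsDepot /p4root/
```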

## On the replica install p4
apt-get install helix-p4d -y

## On the replica, create /p4root
mkdir /p4root

## Move or copy the checkpoint we copied earlier
cp /tmp/checkpoint.3 /p4root

## Move or copy the versioned files we copied earlier
cp -r /tmp/streamsDepot /p4root

## On the replica, make sure the perforce user can act on the files you copied over
chown -R perforce:perforce /p4root

## Change users to the perforce OS user to start up the p4d process
su perforce

## On the replica host restore the checkpoint
p4d -r /p4root -jr /p4root/checkpoint.3
## You should see “Recovering from /p4root/checkpoint.3…”
## You should then see a bunch of files that start with “db.” on /p4root
## After you’re all done, btw, you can remove these checkpoint.3 files

## On the replica host log in to the master p4 instance with the service account so you get a forever-ticket, as we had configured on the master
p4 -u repl_user_for_Replica3 -p login

## You should see a .p4tickets file in the perforce user’s home directory when you do “ls -a” after this.
## If not, check the replica3Log.txt file for indicators of what’s wrong.

## Start the replica, naming it Replica3 and having it listen on port 22222
p4d -r /p4root -In Replica3 -p 22222 -d
## Naming it “Replica3” means it’ll pull up the configs you set on the master for “Replica3”

## You should connect to the master, submit some changes and confirm you see those in the replica
## In my case, I just copied and pasted and submitted some random text files in the streamsDepot and verified the versioned files were replicated over

## Check to make sure the replication processes are running on the replica
p4 -p -u super monitor show -a

## I *think* you can just have a cron job running that tries to start up p4d as though you were just starting the replica
## (sledgehammer method of making sure it’s always running… hah)
p4d -r /p4root -In Replica3 -p 22222 -d

## Potential improvements:
– have .p4tickets file stored in /p4root instead of the perforce user’s home directory (maybe better?)
– See if there’s any way the replication service group could use something less than “super” privileges
– Monitoring to detect if and when replication goes down or otherwise stops
– Cron job to make sure the replica p4d process is auto-restarted if it stops
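For that last item, a crude watchdog could be a tiny script run from cron. This is just a sketch under this walkthrough’s assumptions (replica named Replica3, root at /p4root, port 22222; the script path in the comment is hypothetical):

```shell
#!/bin/sh
# Hypothetical watchdog: restart the replica p4d if it isn't running.
# Schedule from the perforce user's crontab, e.g.:
#   */5 * * * * /usr/local/bin/check_replica3.sh
if ! pgrep -f 'p4d.*-In Replica3' >/dev/null; then
    p4d -r /p4root -In Replica3 -p 22222 -d
fi
```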

## Links

The replica should then start auto-syncing everything
In replica3Log.txt, it should be very clean:

Perforce server info:
Perforce Server starting 2017/04/11 01:54:54 pid 12986 P4D/LINUX26X86_64/2016.2/1498579.
Perforce server info:
2017/04/11 01:54:54 pid 12987 repl_user_for_Replica2@unknown background [p4d/2016.2/LINUX26X86_64/1498579] ‘pull -i 1’
Perforce server info:
2017/04/11 01:54:54 pid 12988 repl_user_for_Replica2@unknown background [p4d/2016.2/LINUX26X86_64/1498579] ‘pull -u -i 1’
Perforce server info:
2017/04/11 01:54:54 pid 12989 repl_user_for_Replica2@unknown background [p4d/2016.2/LINUX26X86_64/1498579] ‘pull -u -i 1’

The following message in replica3Log.txt means your replication service user doesn’t have super permissions.
Check out the “protect” step earlier; the replication user group needs to have super permissions on everything.

Perforce server error:
2017/04/08 18:01:00 pid 1699 service@ background ‘pull -i 1’
Startup command failed: client-Message
2017/04/08 18:01:00 pid 1699 service@ background ‘pull -i 1’
Replica access refused. Ensure that the serverid and service user are correctly configured on the replica. Server Replica1 may not be used by service user service.
Server ‘Replica1’ doesn’t exist.

If you started up p4d on the replica with the wrong IP, the replica3Log.txt file will say as much

Long-running Linux command when you don’t want to have to keep the terminal window open

In my case it’s a 2+ TB file I’m copying with scp, and I can’t realistically expect to keep a PuTTY window open the whole time. Here’s a Stack Overflow post that seems to be working alright: (3rd comment down, by

Create screen
user@server:~$ screen -S bigscptransfer

You’re now in the screen
user@server:~$ scp bigfile.dat server2:.

Detach from the screen using CTRL+A then push D
[detached from 5899.bigscptransfer]

Resume session when you need it with:
user@server:~$ screen -r bigscptransfer

Installing Perforce on Ubuntu Server 16

Pretty straightforward from here:

wget -qO - | sudo apt-key add -

nano /etc/apt/sources.list.d/perforce.list
….. then add the following line to that file
deb xenial release

apt-get update

apt-get install helix-p4d -y


… in the step where you choose the port, “ssl:1666” means it sets up SSL, which means *everything* has to use SSL configs (clients have to accept the server’s fingerprint with “p4 trust”, for example), which is another layer of complexity. Handy, yes, but more complex.

After that, you can log in with any p4 client. It’s pretty good. You’ll have a new user – perforce – that runs the server, and things are generally in good shape.

Growing a Linux disk to be huge

In my case, I had a template VM in ESXi that I deployed and needed to add tons of disk space to – up to 9TB.

First, I’d suggest using cfdisk instead of fdisk – it’s got a pseudo-UI so you can see what’s going on (for those of us without as robust a mental model of command-line output).

When I tried to write the changes to disk I got a message that since the disk was MBR the max size the volume could be was 2TB. Ummm….. now what?

Well, convert it to GPT of course! This can be a dangerous effort, or so it seems, but since this was a brand new machine deployed from a template, I had nothing to lose. Here’s a step-by-step that worked:

I did this a little over a week ago, and it actually worked the first time. Just make sure you get your “sda2” and “sda3” – or whatever they actually are for you – correct.