
Restart-ready SMF manifest for RadiantCMS on Joyent

While I should have done this years ago, I finally got around to making my weblog restart-ready on a Joyent accelerator, so it’ll come online automatically if it dies. I found the documentation on how to do this pretty confusing, so here’s what I did for other non-sysadmins.

I started with Joyent’s example mongrel_cluster manifest recipe, and changed:

the instance name
instance name='INSTANCE_NAME' → instance name='radiant'
the working directory
working_directory='/PATH/TO/RAILS/APP' → working_directory='/home/MY_ADMIN_USER/web' (navigate to your app’s root, then use pwd to get the full path)
the user and group
user='USER' group='GROUP' → user='MY_ADMIN_USER' group='MY_ADMIN_GROUP'
for me both values were my admin user name, but you can confirm by running ls -la in the app’s root and checking the user:group on CMS-created files
the PATH (adding the app’s root)
envvar name="PATH" value="/usr/bin:/bin:/opt/local/bin" → envvar name="PATH" value="/usr/bin:/bin:/opt/local/bin:/home/MY_ADMIN_USER/web"
the mongrel_rails path
run which mongrel_rails and, if necessary, change all the /opt/csw/bin paths to match (which mongrel_rails told me /opt/local/bin/mongrel_rails, so /opt/local/bin)
any other dependencies
you may also need to add other software to the dependencies, but I didn’t
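To dig those values out on the server, something like this works (the /home/MY_ADMIN_USER/web path is just the placeholder used above):

```shell
# Run from your Rails app's root to collect the values the manifest needs.
cd /home/MY_ADMIN_USER/web 2>/dev/null || true  # placeholder path; use your app's root
pwd     # -> working_directory
id -un  # -> user
id -gn  # -> group (primary group; cross-check with ls -la in the app's root)
command -v mongrel_rails || echo "mongrel_rails not on PATH"  # -> path for the exec methods
```

With those values substituted in, here’s the full manifest I ended up with: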
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='mongrel/cluster'>
  <service name='network/mongrel/cluster' type='service' version='0'>
    <dependency
        name='fs'
        grouping='require_all'
        restart_on='none'
        type='service'>
      <service_fmri value='svc:/system/filesystem/local'/>
    </dependency>
    <dependency
        name='net'
        grouping='require_all'
        restart_on='none'
        type='service'>
      <service_fmri value='svc:/network/loopback'/>
      <!-- uncomment the following line if you are on an L+ Accelerator since /home is mounted through nfs -->
      <!-- <service_fmri value='svc:/network/nfs/client'/> -->
    </dependency>
    <dependent
        name='mongrel_multi-user'
        restart_on='none'
        grouping='optional_all'>
      <service_fmri value='svc:/milestone/multi-user'/>
    </dependent>
    <exec_method
        name='start'
        type='method'
        exec='/opt/local/bin/mongrel_rails cluster::start'
        timeout_seconds='60'>
    </exec_method>
    <exec_method
        name='stop'
        type='method'
        exec=':kill'
        timeout_seconds='60'>
    </exec_method>
    <!--
    Define instances
    -->
    <instance name='radiant' enabled='false'>
        <method_context working_directory='/home/MY_ADMIN_USER/web'>
            <method_credential user='MY_ADMIN_USER' group='MY_ADMIN_USER' />
            <method_environment>
              <envvar name="PATH" value="/usr/bin:/bin:/opt/local/bin:/home/MY_ADMIN_USER/web" />
            </method_environment>
        </method_context>
    </instance>
  </service>
</service_bundle>
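svccfg validates the manifest against the SMF DTD when you import it, but you can catch basic XML typos locally first. A small sketch, assuming python3 is available (check_manifest is just a throwaway helper name):

```shell
# Quick well-formedness check for a manifest file (not full DTD validation;
# svccfg does that at import time).
check_manifest() {
    python3 -c 'import sys, xml.etree.ElementTree as ET
ET.parse(sys.argv[1])
print(sys.argv[1], "is well-formed XML")' "$1"
}

# e.g.: check_manifest radiant-manifest.xml
```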

I then followed the instructions in the old wiki entry ‘How to set up a Rails application with Mongrel and Apache’ (the new version, ‘About the Service Management Facility’, is a good overview but doesn’t cover this part in detail yet):

  1. get the SMF manifest file onto the server (upload or c&p with nano)
  2. import the manifest: sudo svccfg import radiant-manifest.xml (it’s validated on import; add the path to the manifest file if necessary)
  3. kill the current mongrel cluster with mongrel_rails cluster::stop (you’ll get “stopping port 8001” etc. feedback)
  4. confirm the PID files have been removed from /tmp/ (they were for me), and delete them if necessary
  5. change to restart-ready SMF version with svcadm enable mongrel/cluster:radiant (where “radiant” was the instance name I’d added to the manifest)
  6. confirm it’s started with svcs -v | grep mongrel (although it should be the last thing listed in the default svcs)
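Putting steps 2–6 together, here’s a sketch of the switch-over as one script; the radiant-manifest.xml file name and the /tmp/ PID-file pattern are assumptions based on this post, so adjust for your setup:

```shell
#!/bin/sh
# Recap of steps 2-6 above as one script. Run it on the accelerator itself;
# the guard just makes it a no-op on machines without SMF.
MANIFEST=radiant-manifest.xml
INSTANCE=mongrel/cluster:radiant

if command -v svccfg >/dev/null 2>&1; then
    sudo svccfg import "$MANIFEST"   # validated against the DTD on import
    mongrel_rails cluster::stop      # stop the hand-started cluster
    rm -f /tmp/mongrel.*.pid         # stale PID files; pattern is a guess, check /tmp/ yourself
    sudo svcadm enable "$INSTANCE"
    svcs -v | grep mongrel           # confirm the service is online
else
    echo "svccfg not found; run these commands on the Joyent accelerator"
fi
```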

Things didn’t work so smoothly for me the first time, so to troubleshoot I followed the brief advice at the bottom of the ‘About the Service Management Facility’ page: