The apssh API¶
Most symbols can be imported directly from the apssh package, e.g.
from apssh import SshJob
No need to import module apssh.sshjob here.
The SshProxy class¶
The SshProxy class models an ssh connection, and is mainly in charge of:
- lazily initializing connections on a need-by-need basis,
- reassembling lines as they come back from the remote.
- class apssh.sshproxy.SshProxy(hostname, *, username=None, gateway=None, keys=None, known_hosts=None, port=22, formatter=None, verbose=None, debug=False, timeout=30)[source]¶
A proxy essentially wraps an ssh connection. It can connect to a remote, and then can run several commands in the most general sense, i.e. including file transfers.
- Parameters:
hostname – remote hostname
username – remote login name
gateway (SshProxy) – when set, this node is then used as a hop for creating a 2-leg ssh connection.
formatter – each SshProxy instance has an attached formatter that is in charge of rendering the output of the various commands. The default is to use an instance of HostFormatter, that outputs lines of the form hostname:actual-output
verbose – enables some user-level feedback on the ssh negotiation. Permission denied messages and similar won’t show up unless verbose is set.
- async connect_lazy()[source]¶
Connects if needed - uses a lock to make it safe for several coroutines to simultaneously try to run commands on the same SshProxy instance.
- Returns:
connection object
- async get_file_s(remotepaths, localpath, **kwds)[source]¶
Retrieve a collection of remote files locally into the same directory. The ssh connection and SFTP subsystem are created and set up if needed.
- Parameters:
remotepaths (list) – remote files to retrieve
localpath – where to store them
kwds – passed along to the underlying asyncssh sftp client; typically preserve, recurse and follow_symlinks are honored as in http://asyncssh.readthedocs.io/en/latest/api.html#asyncssh.SFTPClient.get
- Returns:
True if all went well; raises an exception otherwise
- async mkdir(remotedir)[source]¶
Create a remote directory if needed.
- Parameters:
remotedir (str) – the remote directory to create.
- Returns:
True if remote directory existed or could be created, False if SFTP subsystem could not be set up.
- Raises:
asyncssh.sftp.SFTPError –
- async put_file_s(localpaths, remotepath, **kwds)[source]¶
Copy a collection of local files remotely into the same directory. The ssh connection and SFTP subsystem are created and set up if needed.
- Parameters:
localpaths (list) – files to copy
remotepath (str) – where to copy
kwds – passed along to the underlying asyncssh sftp client; typically preserve, recurse and follow_symlinks are honored as in http://asyncssh.readthedocs.io/en/latest/api.html#asyncssh.SFTPClient.put
- Returns:
True if all went well; raises an exception otherwise
- async put_string_script(script_body, remotefile, **kwds)[source]¶
A convenience for copying over a local script before remote execution. The ssh connection and SFTP subsystem are created and set up if needed. Resulting remote file has mode 755.
- Parameters:
script_body (str) – the contents of the script to create; warning: this is not a filename.
remotefile – filename on the remote end
kwds – passed along to http://asyncssh.readthedocs.io/en/latest/api.html#asyncssh.SFTPClient.open, i.e. for setting encoding or errors.
- Returns:
True if all went well; raises an exception otherwise
- async run(command, **x11_kwds)[source]¶
Run a command, and write its output on the fly according to instance’s formatter.
- Parameters:
command – remote command to run
x11_kwds – optional keyword args that will be passed to create_session, like typically x11_forwarding=True
- Returns:
remote command exit status - or None if nothing could be run at all
Command classes (Run*, Push, Pull)¶
The commands module implements all the command classes, typically Run, RunScript, Pull, and similar classes.
- class apssh.commands.AbstractCommand(*, label=None, allowed_exits=None)[source]¶
Abstract base class for all command classes.
- Parameters:
label – optional label used when representing a scheduler textually or graphically
allowed_exits – the default is to only allow the command to exit(0). Using allowed_exits, one can whitelist a set of exit codes or signals. If the command returns one of these codes, or receives one of these signals, it is deemed to have completed successfully. A retcode of 0 is always allowed.
Examples
allowed_exits=["TERM", 4] would allow the command to either return exit code 4, or to end after receiving signal ‘TERM’. Refer to the POSIX documentation for signal names, like QUIT or ALRM.
Note
allowed_exits is typically useful when a command starts a process that is designed to be killed by another command later in the scheduler.
- async co_run_local(localnode)[source]¶
Needs to be redefined on actual command classes that want to support running on a LocalNode as well.
- Returns:
Should return 0 if everything is fine.
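The allowed_exits logic described above can be sketched as follows; this is a simplified, hypothetical illustration (exit_allowed_sketch is an invented name), not apssh's actual code:

```python
def exit_allowed_sketch(outcome, allowed_exits=None):
    """Simplified sketch of the allowed_exits whitelist (not apssh's
    actual code): an exit code of 0 is always allowed; any other exit
    code or signal name must be explicitly whitelisted."""
    if outcome == 0:
        return True
    return outcome in (allowed_exits or [])

print(exit_allowed_sketch(0))                                  # True
print(exit_allowed_sketch(4, allowed_exits=["TERM", 4]))       # True
print(exit_allowed_sketch("TERM", allowed_exits=["TERM", 4]))  # True
print(exit_allowed_sketch(1, allowed_exits=["TERM", 4]))       # False
```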
- class apssh.commands.CapturableMixin(capture)[source]¶
This class implements the simple logic for capturing a command’s output.
Note: it relies on the presence of the self.node attribute, which points back at the SshNode where this command is going to run; this attribute is set by the SshJob class.
- class apssh.commands.Pull(remotepaths, localpath, *args, label=None, verbose=False, **kwds)[source]¶
Retrieves remote files and stores them locally.
- Parameters:
remotepaths – a collection of remote paths to be retrieved.
localpath – the local directory where to store resulting copies.
label – if set, is used to describe the command in scheduler graphs.
verbose (bool) – be verbose.
kwds – passed as-is to the SFTPClient get method.
See also: http://asyncssh.readthedocs.io/en/latest/api.html#asyncssh.SFTPClient.get
- class apssh.commands.Push(localpaths, remotepath, *args, label=None, verbose=False, **kwds)[source]¶
Put local files onto target node
- Parameters:
localpaths – a collection of local filenames to be copied over to the remote end.
remotepath – the remote directory where to store the copies.
label – if set, is used to describe the command in scheduler graphs.
verbose (bool) – be verbose.
kwds – passed as-is to the SFTPClient put method.
See also: http://asyncssh.readthedocs.io/en/latest/api.html#asyncssh.SFTPClient.put
- class apssh.commands.Run(*argv, verbose=False, x11=False, ignore_outputs=False, label=None, allowed_exits=None, capture=None)[source]¶
The most basic form of a command is to run a remote command
- Parameters:
argv – the parts of the remote command. The actual command run remotely is obtained by concatenating the string representation of each item in argv, separated with spaces.
label – if set, is used to describe the command in scheduler graphs.
verbose (bool) – if set, the actual command being run is printed out.
x11 (bool) – if set, will enable X11 forwarding, so that an X11 program running remotely ends up on the local DISPLAY.
ignore_outputs (bool) – this flag is currently used only when running on a LocalNode(); in that case, the stdout and stderr of the forked process are bound to /dev/null, and no attempt is made to read them; this has turned out to be a useful trick when spawning port-forwarding ssh sessions.
Examples
To remotely run tail -n 1 /etc/lsb-release:
Run("tail -n 1 /etc/lsb-release")
The following forms are exactly equivalent:
Run("tail", "-n", 1, "/etc/lsb-release")
Run("tail -n", 1, "/etc/lsb-release")
- class apssh.commands.RunLocalStuff(args, *, label=None, allowed_exits=None, includes=None, remote_basename=None, x11=False, verbose=False, ignore_outputs=False, capture=None)[source]¶
The base class for RunScript and RunString. This class implements the common logic for a local script that needs to be copied over before being executed.
- Parameters:
args – the argument list for the remote command
label – if set, is used to describe the command in scheduler graphs.
includes – a collection of local files that need to be copied over as well; they get copied in the same directory as the remote script.
verbose – print out more information if set; this additionally causes the remote script to be invoked through bash -x, which admittedly is totally hacky. xxx we need to remove this.
remote_basename – an optional name for the remote copy of the script.
Local commands are copied in a remote directory - typically in ~/.apssh-remote.
Also, all copies are done under a name that contains a random string to avoid collisions. This is because two parallel runs of the same command would otherwise be at risk of one overwriting the remote command file while the second tries to run it, which causes errors like this:
fit26: .apssh-remote/B3.sh: /bin/bash: bad interpreter: Text file busy
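The collision-avoidance scheme could look like the following sketch; both the helper name and the exact naming convention are hypothetical, the actual names apssh generates may differ:

```python
import random
import string

def remote_copy_name_sketch(basename):
    """Hypothetical illustration of the collision-avoidance scheme
    described above (the naming used by apssh may differ): suffix the
    remote copy's name with a random string so that parallel runs of
    the same command never share a remote file."""
    alphabet = string.ascii_lowercase + string.digits
    suffix = "".join(random.choices(alphabet, k=8))
    return f".apssh-remote/{basename}.{suffix}"

# two parallel runs of the same command get distinct remote files
print(remote_copy_name_sketch("B3.sh"))
print(remote_copy_name_sketch("B3.sh"))
```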
- async co_install(node, remote_path)[source]¶
Abstract method describing how to remotely install a local script before it can be invoked.
- async co_run_remote(node)[source]¶
Implemented to satisfy the requirement of AbstractCommand. The common behaviour for both classes is to first invoke co_install() to push the local material over; it should raise an exception in case of failure.
- class apssh.commands.RunScript(local_script, *args, label=None, allowed_exits=None, includes=None, x11=False, verbose=False, capture=None)[source]¶
A class to run a local script file on the remote system, but with arguments passed exactly like with Run
- Parameters:
local_script – the local filename for the script to run remotely
args – the arguments for the remote script; like with Run, these are joined with a space character
label – if set, is used to describe the command in scheduler graphs.
includes – a collection of local files to be copied over in the same location as the remote script, i.e. typically in ~/.apssh-remote
x11 (bool) – enables X11 forwarding
verbose – more output
Examples
To run a local script located in ../foo.sh with specified args:
RunScript("../foo.sh", "arg1", 2, "arg3")
or equivalently:
RunScript("../foo.sh", "arg1 2", "arg3")
- class apssh.commands.RunString(script_body, *args, label=None, allowed_exits=None, includes=None, x11=False, remote_name=None, verbose=False, capture=None)[source]¶
Much like RunScript, but this time the script to run remotely is expected to be passed as a Python string in the first argument.
- Parameters:
script_body (str) – the contents of the script to run remotely.
args – the arguments for the remote script; like with Run, these are joined with a space character
label – if set, is used to describe the command in scheduler graphs.
includes – a collection of local files to be copied over in the same location as the remote script, i.e. typically in ~/.apssh-remote
x11 (bool) – enables X11 forwarding
remote_name – if provided, tells how the created script should be named on the remote node; it is randomly generated if not specified by the caller.
verbose – more output
Examples
Here’s how to call a simple bash wrapper remotely:
myscript = "#!/bin/bash\nfor arg in \"$@\"; do echo arg=$arg; done"
scheduler.add(
    RunString(myscript, "foo", "bar", 2, "arg3",
              remote_name="echo-args.sh"))
- class apssh.commands.StrLikeMixin[source]¶
The various Run* classes need to look like a str object for some operations, like minimally the following dunder methods.
This is needed for the deferred operation mode, where command objects need to remain Deferred objects and not str, as that would imply early evaluation.
Formatter classes¶
A formatter is a class that knows how to deal with the stdout/stderr lines as they come back from an ssh connection.
In its capture form, it can retain this output in memory instead of printing it on the fly.
- class apssh.formatters.Formatter(custom_format)[source]¶
This abstract class describes how to handle the incoming text from a remote command, as well as various events pertaining to an SshProxy.
This object is expected to be created manually, outside of the SshProxy logic.
Examples of predefined formatters:
- TerminalFormatter: prints out lines based on a format (time, hostname, actual line…).
- RawFormatter: shortcut for TerminalFormatter("{linenl}").
- HostFormatter: shortcut for TerminalFormatter("{host}:{linenl}").
- SubdirFormatter: stores in <subdir>/<hostname> all outputs from that host.
- CaptureFormatter: stores the flow in memory instead of printing on the fly.
- class apssh.formatters.TerminalFormatter(custom_format, verbose)[source]¶
Uses print() to render raw lines as they come. Remote stdout goes to stdout of course, and remote stderr goes to stderr. If the verbose attribute is set, additional ssh-related events, like connection open and similar, are also issued on stderr.
- Parameters:
custom_format – a string that describes the format used to print out incoming lines, see below.
verbose – when set, additional information gets issued as well, typically pertaining to the establishment of the ssh connection.
The custom_format attribute can contain the following keywords, that are expanded when actual traffic occurs:
{linenl} the raw contents as sent over the wire
{line} like {linenl} but without the trailing newline
{nl} a literal newline
{fqdn} the remote hostname
{host} the remote hostname (short version, domain name stripped)
{user} the remote username
%H and similar time-oriented formats, applied to the time of local reception; refer to strftime for a list of supported formats. {time} is a shortcut for "%H-%M-%S".
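The keyword expansion above can be sketched with plain string substitution plus strftime; this is a simplified stand-in (render_sketch is an invented name), not TerminalFormatter's actual implementation:

```python
import time

def render_sketch(custom_format, line, fqdn, username):
    """A simplified stand-in for TerminalFormatter's expansion logic
    (not apssh's actual code): expand the {…} keywords listed above,
    then let strftime handle the %-style time directives."""
    expanded = (custom_format
                .replace("{time}", "%H-%M-%S")
                .replace("{linenl}", line)
                .replace("{line}", line.rstrip("\n"))
                .replace("{nl}", "\n")
                .replace("{fqdn}", fqdn)
                .replace("{host}", fqdn.split(".")[0])
                .replace("{user}", username))
    return time.strftime(expanded)

# the HostFormatter-style format from the list above
print(render_sketch("{host}:{line}", "hello\n", "remote.foo.com", "root"))
```

Note that {linenl} must be expanded before {line}, since the former contains the latter as a prefix.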
- class apssh.formatters.SubdirFormatter(run_name, *, verbose=True)[source]¶
This class stores remote outputs on the filesystem rather than on the terminal, using the remote hostname as the base for the local filename.
With this class, the remote stdout, stderr, as well as ssh events if requested, are all merged in a single output file, named after the hostname.
- Parameters:
run_name – the name of a local directory where to store the resulting output; this directory is created if needed.
verbose – also records ssh events in the resulting file.
Examples
If run_name is set to probing, the session for host foo.com will end up in file probing/foo.com.
- class apssh.formatters.CaptureFormatter(custom_format='{linenl}', verbose=True)[source]¶
This class captures remote output in memory. For now it just provides options to start and get a capture.
Examples
To do a rough equivalent of bash’s:
captured_output=$(ssh remote.foo.com cat /etc/release-notes)
you would do this:
s = Scheduler()
f = CaptureFormatter()
n = SshNode('remote.foo.com', formatter=f)
s.add(SshJob(node=n, command="cat /etc/release-notes"))
f.start_capture()
s.run()
captured = f.get_capture()
The Service class¶
The service module defines the Service helper class.
- class apssh.service.Service(command, *, service_id, tty=False, systemd_type='simple', environ=None, stop_if_running=True, verbose=False)[source]¶
The Service class is a helper class that deals with services that an experiment scheduler needs to start and stop over the course of its execution. It leverages systemd-run, which thus needs to be available on the remote box.
Typical examples include starting and stopping a netcat server, or a tcpdump session.
A Service instance is then able to generate a Command instance for starting or stopping the service, that should be inserted in an SshJob, just like e.g. a usual Run instance.
- Parameters:
command (str) – the command to start the service; a Deferred instance is acceptable too
service_id (str) – this mandatory id is passed to systemd-run to monitor the associated transient service; should be unique on a given host, in particular so that reset-failed can work reliably
tty (bool) – some services require a pseudo-tty to work properly
systemd_type (str) – a systemd service unit can have several values for its type setting, depending on the forking strategy implemented in the main command. The default used in Service is simple, which is correct for a command that hangs (does not fork or go in the background). If on the contrary the command already handles forking, then it may be appropriate to use the forking systemd type instead. Refer to the systemd documentation for more details, at https://www.freedesktop.org/software/systemd/man/systemd.service.html#Type=
environ – a dictionary that defines additional environment variables to be made visible to the running service. In contrast with what happens with regular Run commands, processes forked by systemd have a very limited set of environment variables defined - typically only LANG and PATH. If your program relies on, for example, the USER variable being defined as well, you may specify it here, for example environ={'USER': 'root'}
stop_if_running – by default, prior to starting the service using systemd-run, start_command will ensure that no service of that name is currently running; this is especially useful when running the same experiment over and over, if you cannot be sure that your experiment code properly stops that service. Setting this attribute to False prevents this behaviour, in which case start_command issues a mere invocation of systemd-run.
Example
To start a remote service that triggers a tcpdump session:
service = Service(
    "tcpdump -i eth0 -w /root/ethernet.pcap",
    service_id='tcpdump',
    tty=True)
SshJob(
    remotenode,
    commands=[
        Run(service.start_command()),
    ],
    scheduler=scheduler,
)
# and down the road when you're done
SshJob(
    remotenode,
    commands=[
        Run(service.stop_command()),
    ],
    scheduler=scheduler,
)
- start_command(*, label=None, **kwds)[source]¶
- Returns:
a Run instance suitable to be inserted in a SshJob object
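Assembling such a systemd-run invocation from the parameters above could look like the following sketch. This is illustrative only: start_command_sketch is an invented name, and while --unit, --service-type, --setenv and --pty are real systemd-run options, the exact options apssh uses may differ:

```python
def start_command_sketch(command, *, service_id, systemd_type="simple",
                         environ=None, tty=False):
    """Illustrative sketch of a systemd-run invocation in the spirit
    of Service.start_command() (not apssh's actual code). The options
    shown here exist in systemd-run, but apssh may use others."""
    parts = ["systemd-run",
             f"--unit={service_id}",
             f"--service-type={systemd_type}"]
    if tty:
        parts.append("--pty")
    for var, value in (environ or {}).items():
        parts.append(f"--setenv={var}={value}")
    parts.append(command)
    return " ".join(parts)

print(start_command_sketch("tcpdump -i eth0 -w /root/ethernet.pcap",
                           service_id='tcpdump', tty=True))
```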
Deferred evaluation classes¶
Support for deferred evaluation; the typical use case is, you want to write something like:
somevar=$(ssh nodename some-command)
ssh othernode other-command $somevar
but because a Scheduler is entirely created before it gets to run anything, creating a Run() instance from a string means that the string must be known at scheduler-creation time, at which point we do not yet have the value of somevar.
That's where Deferred objects come in; they fill in for actual str objects, but are actually templates that are rendered later on, when the command is actually about to trigger.
Typically, in a kubernetes-backed scenario, we often need to get a pod's name by issuing an ssh command to the master node, so this is not static data that can be filled in the code.
- class apssh.deferred.Capture(varname, variables)[source]¶
This class has no logic in itself; it is only a convenience so that one can specify where a Run command should store its captured output.
for example a shell script like:
somevar=$(ssh nodename some-command)
ssh othernode other-command $somevar
could be mimicked with (simplified version):
env = Variables()
Sequence(
    SshJob(node_obj,
           # the output of this command ends up
           # as the 'somevar' variable in env
           commands=Run("some-command",
                        capture=Capture('somevar', env))),
    SshJob(other_node_obj,
           # which we use here inside a jinja template
           commands=Run(Deferred("other-command {{somevar}}", env))))
- class apssh.deferred.Deferred(template, variables)[source]¶
The Deferred class is the trick that lets you introduce what we call deferred evaluation in a scenario; the main use case is when you run a remote command to compute something, which in turn is used later on by another Run or Service object; except that, because the scheduler and its jobs/commands pieces are created before they get run, you cannot compute all the details right away, you need to have some parts replaced later on - that is, deferred.
- Parameters:
template (str) – a Jinja template as a string, that may contain variables or expressions enclosed in {{}}
variables (Variables) – an environment object that will collect values over time, so that variables in {{}} can be expanded when the time comes
A Deferred object can be used to create instances of the Run class and its siblings, or of the Service class; this is useful when the command contains a part that needs to be computed during the scenario.
Warning
Beware of f-strings! Since Jinja templates use double brackets as delimiters for expressions, it is probably unwise to create a template from an f-string; if you do, you will have to insert variables inside quadruple brackets like so: {{{{varname}}}}, so that after f-string evaluation a double bracket remains.
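The late-rendering idea can be sketched with a plain regex substitution; this is a simplified stand-in for illustration (expand_sketch is an invented name), whereas apssh actually relies on Jinja2 templates:

```python
import re

def expand_sketch(template, variables):
    """A simplified stand-in for Deferred rendering (apssh actually
    uses Jinja2, not this code): replace every {{name}} with its
    current value at the time the command is about to trigger."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda match: str(variables[match.group(1)]),
                  template)

env = {}                           # values show up during the run
template = "other-command {{somevar}}"
env["somevar"] = "pod-1234"        # e.g. captured from a remote command
print(expand_sketch(template, env))
```

The key point is that the template object is created early, while the substitution happens only once the variable has been filled in.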
- class apssh.deferred.Variables[source]¶
Think of this class as a regular namespace, i.e. a set of associations variable → value.
We cannot use regular Python binding because, at the time when a Scheduler gets built, those variables are not yet available.
So the Variables object typically collects values that are computed during a scheduler run.
Just like a JS object, a Variables object can be accessed through indexing or attributes all the same, so that:
variables = Variables()
variables.foo = 'bar'
variables['bar'] = 'foo'
variables.foo == variables['foo']   # True
variables.bar == variables['bar']   # True
It is common to create a single Variables environment for a Scheduler run; variables inside the environment are often set by creating Run-like objects with a Capture instance that specifies in what variable the result should end up.
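The dual attribute/index access shown above can be sketched in a few lines; this is a hypothetical minimal implementation (VariablesSketch is an invented name), not apssh's actual Variables class:

```python
class VariablesSketch(dict):
    """A minimal sketch (hypothetical, not apssh's implementation) of
    a dict whose entries can also be accessed as attributes, like the
    Variables class allows."""

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError as exc:
            raise AttributeError(name) from exc

    def __setattr__(self, name, value):
        # attribute assignment lands in the underlying dict
        self[name] = value

variables = VariablesSketch()
variables.foo = 'bar'
variables['bar'] = 'foo'
print(variables.foo == variables['foo'])   # True
print(variables.bar == variables['bar'])   # True
```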
YAML loader¶
The YamlLoader class - how to create objects in YAML.
- class apssh.yaml_loader.YamlLoader(filename)[source]¶
The YamlLoader class builds a Scheduler object from a yaml file
In addition to using the regular YAML syntax (the current implementation uses pyyaml, which supports YAML v1.1), the input can optionally pass through Jinja2 templating; to that end, call the load* methods with a non-empty env parameter, that will specify templating variables.
- Parameters:
filename (str) – the input file (can be a Path as well)
Example
A simple example can be found in the github repo fit-r2lab/r2lab-demos, where the same script is written:
https://github.com/fit-r2lab/r2lab-demos/tree/master/my-first-nepi-ng-script
- load(env=None, *, save_intermediate=None)[source]¶
Parses the input filename and returns a Scheduler object; a shortcut to using load_with_maps() and discarding the intermediary maps.
Same parameters as load_with_maps().
- Return type:
Scheduler
- load_with_maps(env=None, *, save_intermediate=None)[source]¶
Parses the input filename.
- Parameters:
env (dict) – if not empty, a Jinja2 pass is performed on the input
save_intermediate – defaults to None, meaning do nothing; if provided, this parameter means to save the output of the jinja templating phase, typically for debugging purposes; if set to True, the output filename is computed from the object’s filename as provided at constructor-time; alternatively you may also pass a string, or a Path instance. If env is None, this parameter is ignored.
- Returns:
(*) nodes_map, a dictionary linking ids to SshNode instances
(*) jobs_map, a dictionary linking ids to Job instances
(*) the resulting scheduler
- Return type:
a tuple containing
Utilities¶
- apssh.topology.close_ssh_in_scheduler(scheduler, manage_gateways=True)[source]¶
Convenience: synchronous version of co_close_ssh_in_scheduler().
- Parameters:
manage_gateways (bool) – passed as-is
- async apssh.topology.co_close_ssh_in_scheduler(scheduler, manage_gateways=True)[source]¶
This utility function closes all ssh connections involved in a scheduler.
Its logic is to find all SshNode instances referred to in the jobs contained in the scheduler, nested schedulers included. All the attached ssh connections are then closed, starting with the remotest ones.
- Parameters:
manage_gateways (bool) – when this parameter is False, all the nodes that appear in at least one job are considered. If it is True, then in addition to that, all the nodes that appear as a gateway of a node in that first set are considered as well.
- apssh.topology.topology_as_dotfile(scheduler, filename)[source]¶
Convenience function to store a dot file from a scheduler.
- Parameters:
scheduler – the input scheduler
filename – output filename
- apssh.topology.topology_as_pngfile(scheduler, filename)[source]¶
Convenience wrapper that creates a png file.
- Parameters:
scheduler – the input scheduler
filename – output filename, without the .png extension
- Returns:
created file name
Notes
This actually uses the binary dot program.
A file named as the output but with a .dot extension is created as an artefact by this method.
- apssh.topology.topology_dot(scheduler)[source]¶
Computes the relationship between nodes and gateways, for a given scheduler.
- Returns:
a string in DOT format.
- Return type:
str
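The node → gateway relationship can be rendered in DOT with a few lines; the helper below is hypothetical (not apssh's actual code), in the spirit of topology_dot():

```python
def topology_dot_sketch(gateway_of):
    """Hypothetical helper (not apssh's actual code): render a
    node → gateway mapping as a string in DOT format, in the spirit
    of topology_dot(); a None gateway means a directly reachable node."""
    lines = ["digraph topology {"]
    for node, gateway in sorted(gateway_of.items()):
        if gateway is None:
            lines.append(f'  "{node}";')
        else:
            # an edge gateway -> node means "gateway is used to reach node"
            lines.append(f'  "{gateway}" -> "{node}";')
    lines.append("}")
    return "\n".join(lines)

print(topology_dot_sketch({"gw.foo.com": None,
                           "node1": "gw.foo.com",
                           "node2": "gw.foo.com"}))
```

The resulting string can be fed to the dot binary, just like the output of topology_dot().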
- apssh.topology.topology_graph(scheduler)[source]¶
Much like Scheduler.graph() in asynciojobs, this convenience function creates a graphviz graph object, that can be used to visualize the various nodes and gateways present in a scheduler, through the relationship: x is used as a gateway to reach y.
- Returns:
a graph
- Return type:
graphviz.Digraph
This method is typically useful in a Jupyter notebook, so as to visualize a topology in graph format - see http://graphviz.readthedocs.io/en/stable/manual.html#jupyter-notebooks for how this works.
The dependency from apssh to graphviz is limited to this function and topology_as_pngfile, as these are the only places that need that library, and as installing graphviz can be cumbersome.
For example, on macOS I had to do both:
brew install graphviz   # for the C/C++ binary stuff
pip3 install graphviz   # for the python bindings
nepi-ng node classes¶
The SshNode and LocalNode classes are designed as companions to the SshJob class, which needs a node attribute to describe on which node to run commands.
- class apssh.nodes.LocalNode(formatter=None, verbose=None)[source]¶
For convenience and consistency, this class can be used as the node attribute of a SshJob object, so as to define a set of commands to run locally.
- Parameters:
formatter – a formatter instance; defaults to an instance of HostFormatter
verbose – if provided, passed to the formatter instance
Examples
To create a job that runs 2 commands locally:
SshJob(node=LocalNode(),
       commands=[
           Run("cat /etc/motd"),
           Run("sleep 10"),
       ])
Note
Not all command classes support running on a local node; essentially this is only available for usual Run commands as of this writing.
- class apssh.nodes.SshNode(hostname, *, username=None, keys=None, **kwds)[source]¶
An instance of SshNode is typically needed to create an apssh.sshjob.SshJob instance, that defines a batch of commands or file transfers to run in sequence on that node.
Examples
A typical usage to create a job that runs 2 commands remotely:
remote_node = SshNode('remote.foo.com', username='tutu')
SshJob(node=remote_node,
       commands=[
           Run("cat /etc/motd"),
           Run("sleep 10"),
       ])
This class is a very close specialization of the SshProxy class. The only differences are in the handling of default values at build time.
- Parameters:
hostname – remote node’s hostname
username – defaults to root if unspecified; note that SshProxy’s default is to use the local username instead
keys – filenames for the private keys to use when authenticating; the default policy implemented in this class is to first use the keys currently loaded in the ssh agent. If none can be found this way, SshNode will attempt to import the default ssh keys located in ~/.ssh/id_rsa and ~/.ssh/id_dsa.
kwds – passed along to the SshProxy class.
nepi-ng job classes¶
The SshJob class is a specialization of asynciojobs’ AbstractJob class. It allows grouping operations (commands & file transfers) made in sequence on a given remote (or even, for convenience, local) node.
- exception apssh.sshjob.CommandFailedError[source]¶
The exception class raised when a command that is part of a critical SshJob instance fails.
This in turn is designed to cause the abortion of the surrounding scheduler.
- class apssh.sshjob.SshJob(node, *, command=None, commands=None, keep_connection=False, verbose=None, forever=None, critical=None, **kwds)[source]¶
A subclass of asynciojobs’s AbstractJob object that is set to run a command, or list of commands, on a remote node specified by a SshNode object.
- Parameters:
node – an SshNode instance that describes the node where the attached commands will run, or the host used for file transfers for commands like e.g. Pull. It is possible to use a LocalNode instance too, for running commands locally, although some types of commands, like precisely file transfers, do not support this.
command – an alias for commands
commands – an ordered collection of commands to run sequentially on the reference node. For convenience, you can set either commands or command; both forms are equivalent, but you need to make sure to give exactly one of the two. commands can be set in a variety of ways:
(1) a list/tuple of AbstractCommand objects, e.g.:
commands = [ Run(..), RunScript(...), ..]
(2) a single instance of AbstractCommand, e.g.:
commands = RunScript(...)
(3) a list/tuple of strings, in which case a single Run object is created, e.g.:
commands = [ "uname", "-a" ]
(4) a single string, here again a single Run object is created, e.g.:
commands = "uname -a"
Regardless, the commands attached internally to a SshJob object are always represented as a list of AbstractCommand instances.
verbose – if set to a non-None value, it is used to set - and possibly override - the verbose value in all the command instances in the job.
keep_connection – if set, this flag prevents co_shutdown, when sent to this job instance by the scheduler upon completion, from closing the connection to the attached node.
forever – passed to AbstractJob; default is False, which may differ from the one adopted in asynciojobs.
critical – passed to AbstractJob; default is True, which may differ from the one adopted in asynciojobs.
kwds – passed as-is to AbstractJob; typically useful for setting required and scheduler at build-time.
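The normalization of the four accepted forms above can be sketched as follows; this is an illustrative stand-in (normalize_commands_sketch is an invented name) where plain strings play the role of Run instances, not apssh's actual code:

```python
def normalize_commands_sketch(command=None, commands=None):
    """Illustrative sketch (not apssh's actual code) of how the four
    accepted forms of command/commands could be reduced to a list;
    plain strings stand in for Run instances here."""
    if (command is None) == (commands is None):
        raise ValueError("set exactly one of command/commands")
    spec = commands if commands is not None else command
    if isinstance(spec, str):
        return [spec]                      # form (4): a single string
    if isinstance(spec, (list, tuple)):
        if spec and all(isinstance(item, str) for item in spec):
            return [" ".join(spec)]        # form (3): strings -> one Run
        return list(spec)                  # form (1): command objects
    return [spec]                          # form (2): a single object

print(normalize_commands_sketch(commands=["uname", "-a"]))   # ['uname -a']
print(normalize_commands_sketch(command="uname -a"))         # ['uname -a']
```

Either way, the result is always a list, matching the invariant stated above.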
- async close()[source]¶
Implemented as part of the AbstractJob protocol.
Default behaviour is to close the underlying ssh connection, that is to say the attached node object, unless keep_connection was set, in which case no action is taken.
- Returns:
None
- async co_run()[source]¶
This method is triggered by a running scheduler as part of the AbstractJob protocol. It simply runs all commands sequentially.
If any of the commands fail, the behaviour depends on the job’s critical flag:
if the job is not critical, then all the commands are triggered no matter what, and the return code reflects that something went wrong by reporting the last failing code;
if the job is critical on the other hand, then the first failing command causes co_run to stop abruptly and to throw an exception, which in turn will cause the surrounding scheduler execution to abort immediately.
- Returns:
0 if everything runs fine, the faulty return code otherwise.
- Return type:
int
- Raises:
CommandFailedError – in the case where the object instance is defined as critical, and one of the commands fails, an exception is raised, which leads the running scheduler to abort abruptly.
- async co_shutdown()[source]¶
Implemented as part of the AbstractJob protocol.
Default behaviour is to close the underlying ssh connection, that is to say the attached node object, unless keep_connection was set, in which case no action is taken.
- Returns:
None
- graph_label()[source]¶
This method customizes rendering of this job instance for calls to its Scheduler’s graph() or export_as_dotfile() methods.
Relies on each command’s label_line() method.
Tools to deal with keys¶
Basic tools for loading ssh keys from the user space or the agent
- apssh.keys.import_private_key(filename)[source]¶
This function attempts to import a private key from its filename. It will prompt for a password if needed.
- Parameters:
filename – the local path to the private key
- Returns:
a (asyncssh) SSHKey object if successful, or None
- apssh.keys.load_agent_keys(agent_path=None)[source]¶
The ssh-agent is a convenience tool that aims at easing the use of private keys protected with a password. In a nutshell, the agent runs on your local computer, and you trust it enough to load one or several keys into it once and for all - and you provide the password at that time.
Later on, each time an ssh connection needs to access a key, the agent can act as a proxy for you and pass the key along to the ssh client without the need for you to enter the password.
The load_agent_keys function allows your python code to access the keys currently known to the agent. It is automatically called by the SshNode class if you do not explicitly specify the set of keys that you plan to use.
- Parameters:
agent_path – how to locate the agent; defaults to env. variable $SSH_AUTH_SOCK
- Returns:
a list of SSHKey keys from the agent
Note
Use the command ssh-add -l to inspect the set of keys currently present in your agent.
- apssh.keys.load_private_keys(command_line_keys=None, verbose=False)[source]¶
A utility that implements a default policy for locating private keys.
- Parameters:
command_line_keys – a collection of local filenames that should contain private keys; this should correspond to keys that a user has explicitly decided to use through a command-line option or similar;
verbose – gives more details on what is going on.
This function is used both by the apssh binary and by the SshNode class. Here is, for example, how apssh locates private keys:
1. If no keys are given as the command_line_keys parameter (typically through the apssh -k command line option), then:
1.a if an ssh agent can be reached using the SSH_AUTH_SOCK environment variable, and offers a non-empty list of keys, apssh will use the keys loaded in the agent;
1.b otherwise, apssh will use ~/.ssh/id_rsa and ~/.ssh/id_dsa if they exist.
2. If keys are specified on the command line, that exact list is used for loading private keys.
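The policy above can be sketched as a small pure function; this is a hypothetical illustration (locate_keys_sketch is an invented name, and agent_keys stands in for whatever the agent holds), not apssh's actual code:

```python
import os

def locate_keys_sketch(command_line_keys, agent_keys, home="/home/user"):
    """Hypothetical sketch of the key-location policy described above
    (not apssh's actual code); agent_keys stands in for whatever keys
    the ssh agent currently holds."""
    if command_line_keys:                  # 2: the explicit list wins
        return list(command_line_keys)
    if agent_keys:                         # 1.a: keys loaded in the agent
        return list(agent_keys)
    candidates = [os.path.join(home, ".ssh", name)
                  for name in ("id_rsa", "id_dsa")]
    return [path for path in candidates    # 1.b: default key files
            if os.path.exists(path)]

print(locate_keys_sketch([], ["agent-key"]))                 # ['agent-key']
print(locate_keys_sketch(["~/.ssh/other"], ["agent-key"]))   # ['~/.ssh/other']
```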
Note
Use ssh-add for managing the keys known to the agent.