Backup MX setup - alternative to db?

6 messages

Backup MX setup - alternative to db?

CSS-4
Hi all,

I have a handful of personal domains that I host myself - both as a place to experiment a bit (I roll new things out here before using them on paying clients), and a place to play with things that don’t scale well.  As of now, I just have a single MXer with a pretty standard Postfix setup.  Domain/user maps are all in mysql.

I just grabbed a few VPSs since they are cheap and I wanted to try out Vultr.com. I bought the smallest possible - only 512MB of RAM. I’m running nsd for DNS services (I found setting up two small VPSs to be cheaper and more fun than paying for secondary NS), and I’d like to add backup MX to both hosts. I do NOT want to run mysql or anything else that’s a memory pig on these.

My idea for getting my lookup maps in place is just to write a small perl script that dumps my config info from mysql into flat files, uses scp to copy the files over to the backup MXers, and then runs postmap on the output on the backup MXers. Before I go ahead with this, are there any clever options I’m overlooking for keeping the same data on servers that use different backing stores for the maps?
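Concretely, the backup-MX side would just be plain indexed maps in main.cf - a rough sketch (paths here are placeholders, not my real config):

```
# main.cf fragment on a backup MX - hypothetical sketch, paths are placeholders
relay_domains = hash:/usr/local/etc/postfix/relay_domains
relay_recipient_maps = hash:/usr/local/etc/postfix/relay_recipients
```

so the only moving part left is regenerating those two files and pushing them out.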

Thanks,

Charles

Re: Backup MX setup - alternative to db?

lists@lazygranch.com
I've never used rsync in daemon mode (if that is the right way to phrase it), but wouldn't that do everything automatically? 

I know on Digital Ocean you can use a special network between "droplets" (VMs) that is local. There is no transit cost. Perhaps Vultr does the same thing.

Vultr has a free DNS.  

If I wasn't running FreeBSD, I'd probably be on Linode.
https://www.vpsbenchmarks.com/
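An rsync daemon setup for this would be something like the following (module name and addresses are invented for illustration):

```
# /etc/rsyncd.conf on the primary MX - hypothetical sketch
[postfix-maps]
    path = /usr/local/etc/postfix-sync
    read only = yes
    hosts allow = 192.0.2.10 192.0.2.11
```

with each backup MX pulling on a cron schedule, e.g. "rsync -az rsync://mx.example.com/postfix-maps/ /usr/local/etc/postfix-sync/".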

OT (was Re: Backup MX setup - alternative to db?)

Curtis Villamizar-2
Charles,

At one point I used homegrown shell and perl for my CA maintenance.
DNS zone files and server configs were all in a set of files with
substitutions like ${{HOST}}, ${{DOMAIN}}, ${{FQDN}}, ${{IPv4::fqdn}},
${{IPv6::fqdn}}, and ${{CNAME::fqdn}}, so that a generic config
can cover multiple hosts.  I do have two physical sites about 2 hours
apart, so two DNS servers, MTAs, etc.  Each site has a subdomain, and
one has multiple subnets, each with a subdomain.  I added some CNAMEs
in DNS for things like ${{default-route.${{DOMAIN}}}} that occur
in configs, plus ${{CNAME::msa.${{DOMAIN}}}}.  I used perl and shell
to do the substitutions (looking up DNS stuff in local files, not DNS
itself), a few shell scripts and scp/ssh to distribute files, and also
gmake to simplify things a bit more.  rsync would work as well as
scp/ssh, but I'd still need the substitution step and a local staging dir.
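A toy version of that substitution pass looks something like this (the template line and the HOST/DOMAIN values are invented for illustration, not the actual tooling):

```shell
# Toy sketch of the ${{...}} substitution pass; the template line and
# HOST/DOMAIN values are invented for illustration.
HOST=mx1
DOMAIN=example.com
printf 'myhostname = ${{HOST}}.${{DOMAIN}}\n' > main.cf.in
sed -e "s/\${{HOST}}/$HOST/g" -e "s/\${{DOMAIN}}/$DOMAIN/g" main.cf.in
# -> myhostname = mx1.example.com
```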

This worked fine for years (over a decade for this set of tools,
almost three decades for this approach).  Somewhat recently the key
rollover handled by the CA tools became problematic so I rewrote that
in C++.  I'm in the process of rewriting the DNS stuff in C++ since
the config language for DNS was ... uhm ... suboptimal (maybe a bit
kludgy).  The DNS tool rewrite will affect the tools downstream.

Because of that ongoing rewrite the tools are in slight disarray at
this exact moment, so I can't share them.  I also wouldn't want to
share the tools widely at this point due to insufficient
documentation.  I can set it up, but without documentation this set of
code is not a good solution for others.  It's also a bit quirky and
fragile in places.  I have used this or earlier iterations at previous
employers with their written acknowledgement that they had no IPR
claims on the tools.

Shell and perl for substitutions and scp/ssh or rsync for distribution
do work fine.  You can wrap it all in make or gmake.  The way I did it
was "gmake REMOTE_HOST=host_or_fqdn {all,compare,install}", where the
make target "all" mostly checks the CA for time to rollover, checks
DNS (where DNS depends on CA for TLSA), checks local files (which
depend on the DNS local files), and does substitutions for that host.
Making an ns host includes making named.conf and signed zone files.

The goal is to install a host (a physical host or VM or BSD jail) from
scratch (FreeBSD locally compiled distribution, plus locally compiled
packages tar file), add a /root/.ssh/authorized_keys file, run "gmake
REMOTE_HOST=fqdn install", and be done - just reboot the newly
installed host.  It's almost that easy.  It does install packages
(like openssl and postfix) used by that particular type of host.  I do
have to "cd install_certs; gmake REMOTE_HOST=fqdn install" to add TLS
key, cert, and CA cert files for some hosts.

I don't know if this helps since I can't at this time share the tools.
But the point is it can be done and can be improved over time.

Curtis



Re: Backup MX setup - alternative to db?

CSS-4
In reply to this post by lists@lazygranch.com

> On Apr 29, 2017, at 6:41 AM, [hidden email] wrote:
>
> I've never used rsync in daemon mode (if that is the right way to phrase it), but wouldn't that do everything automatically?

I’m all set on transferring data; my main interest is in dumping the data from mysql and then creating the same maps in another db storage format, like dbm, on the other two hosts.

I got excited briefly when I saw the postmap “-s” flag, I was hoping I could use that, but apparently that flag is not yet supported for mysql:

[root@nac /home/spork]# postmap -s mysql:/usr/local/etc/postfix/mysql-virtual-mailbox-maps.cf
postmap: fatal: table mysql:/usr/local/etc/postfix/mysql-virtual-mailbox-maps.cf: sequence operation is not supported
[root@nac /home/spork]#

And it looks like the postmap “-q” flag can’t just take a wildcard and return everything.

So I think my only option is to query mysql directly (again, unless someone has some other clever way of doing it!).  Given how modular postfix appears to be, I was hoping that one of the bundled utilities might be able to dump all the maps in a common format.
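The transformation itself is trivial once the data is out of mysql - something like this sketch (the mysql invocation is faked with printf here so it stands alone; table/column names are from my postfixadmin schema):

```shell
# Sketch: turn a one-column query result into postmap(1) input.
# In real use, replace the printf with something like:
#   mysql -N -B -u mail mail -e "SELECT domain FROM domain WHERE active='1'"
printf 'example.com\nexample.net\n' |
    awk '{ print $1 "\tx" }' > relay_domains
head -1 relay_domains    # each output line is "domain<TAB>x"
```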

Thanks,

Charles



Re: Backup MX setup - alternative to db?

Niklaas Baudet von Gersdorff-2
In reply to this post by CSS-4
CSS [2017-04-28 15:48 -0400] :

> My idea to get my lookup maps in place is just to write a small
> perl script that dumps my config info from mysql into flat
> files, uses scp to copy the files over to the backup MXers, and
> then runs postmap on the output on the backup MXers.  Before
> I go ahead with this, any clever options that I’m overlooking
> to have the same data on servers using different backing stores
> for the maps?

I am not quite sure whether this is what you're looking for, but
csync2 [1] is quite useful for keeping files in sync across
a cluster.

    Niklaas

1: http://oss.linbit.com/csync2/
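A minimal csync2 config for this case might look roughly like so (host names, paths, and the action script are placeholders, not from the OP's setup):

```
# /etc/csync2.cfg - hypothetical sketch; hosts, paths, and the
# rebuild script are placeholders
group postfix_maps {
    host mx1.example.com mx2.example.com mx3.example.com;
    key  /etc/csync2.key;
    include /usr/local/etc/postfix-sync;
    action {
        pattern /usr/local/etc/postfix-sync/*;
        exec "/usr/local/sbin/rebuild-maps.sh";
        logfile "/var/log/csync2-action.log";
    }
}
```

The action block would let csync2 re-run postmap automatically after each sync.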


Re: Backup MX setup - alternative to db?

CSS-4
In reply to this post by CSS-4

> So my only option is to query mysql directly I think (again, unless someone has some other clever way of doing it!).  Given how modular postfix appears, I was thinking that one of the bundled utilities might be able to dump all the maps in a common format.


This is what I ended up with.  It’s pretty basic and can probably fail in bad ways, but it works.  On the other end there’s a short shell script that puts the files in place and runs postmap on all the relevant files.  The sync directory also includes some other files that I want to keep identical, like a badrcpt file, postscreen whitelist, etc.

Charles

#!/usr/local/bin/perl -w

use strict;
use DBI;
use Net::OpenSSH;

# db connection params
my $db_user = "mail";
my $db_pass = "XXX";
my $db_host = "localhost";
my $db_name = "mail";
# output files
my $domain_file = "/usr/local/etc/postfix-sync/relay_domains";
my $recip_file  = "/usr/local/etc/postfix-sync/relay_recipients";
# scp params
my $rhost   = "sea.XXX.com";
my $ruser   = "syncer";
my $srcdir  = '/usr/local/etc/postfix-sync/*';
my $destdir = "/home/syncer/postfix/";
my $rkey    = "/root/.ssh/id_rsa_syncer";

# connect to db
my $dbh = DBI->connect("DBI:mysql:database=$db_name;host=$db_host", $db_user, $db_pass)
    or die "Can't connect to db: $DBI::errstr\n";

# query domain list
my $sth = $dbh->prepare("SELECT domain FROM domain WHERE active = '1' AND domain != 'ALL'");

$sth->execute();

open(my $fh, '>', $domain_file) or die "Could not open file '$domain_file' $!";

print $fh "# start of domains\n";

while ( my @row = $sth->fetchrow_array() ) {
        # one "domain<TAB>x" entry per line, as postmap expects
        print $fh "@row\tx\n";
}
warn "Problem in retrieving results: ", $sth->errstr(), "\n"
        if $sth->err();

print $fh "# end of domains\n";
close $fh;

open($fh, '>', $recip_file) or die "Could not open file '$recip_file' $!";

# query alias/mailbox list - this uses the postfixadmin schema
$sth = $dbh->prepare("SELECT address FROM alias WHERE active = '1'");

$sth->execute();

print $fh "# start of aliases\n";

while ( my @row = $sth->fetchrow_array() ) {
        print $fh "@row\tx\n";
}
warn "Problem in retrieving results: ", $sth->errstr(), "\n"
        if $sth->err();

print $fh "# end of aliases\n";
close $fh;

# scp the generated files to the backup MX
my $ssh = Net::OpenSSH->new($rhost, user => $ruser, key_path => $rkey);
$ssh->scp_put({glob => 1}, $srcdir, $destdir)
    or die "scp failed: " . $ssh->error;

exit;
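The receiving-end shell script is nothing fancy - roughly this shape (paths follow the perl script above; filenames beyond the two maps are omitted):

```shell
#!/bin/sh
# Rough sketch of the receiving-end script (paths/filenames assumed).
SRC=/home/syncer/postfix
DST=/usr/local/etc/postfix
for f in relay_domains relay_recipients; do
    if [ -f "$SRC/$f" ]; then
        cp "$SRC/$f" "$DST/$f"
        postmap "hash:$DST/$f"   # rebuild the indexed .db file Postfix reads
    fi
done
```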

