

Python Logger.info Method Code Examples

This article collects typical usage examples of the Logger.info method from the Python module logger.logger. If you are unsure what Logger.info does or how to use it, the curated code examples below should help. You can also explore further usage examples of logger.logger.Logger, the class this method belongs to.


The following presents 9 code examples of the Logger.info method, sorted by popularity by default.
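
Before the examples, here is a minimal sketch of the pattern that recurs throughout them: a class or function accepts an optional logger and falls back to a fresh Logger() when none is supplied, then reports progress through info(), highlight() and debug(). The sketch assumes the py_pg_tools sources are on the import path; the messages are made up.

from logger.logger import Logger

def do_work(logger=None):
    # Fall back to a new Logger when the caller does not pass one
    if not logger:
        logger = Logger()
    logger.highlight('info', 'Beginning the work...', 'white')  # emphasised message
    logger.info('A plain informational message')                # Logger.info
    logger.debug('Internal details that only go to the log file')

do_work()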

Example 1: get_filtered_dbnames

# Required import: from logger.logger import Logger [as alias]
# Or: from logger.logger.Logger import info [as alias]
    def get_filtered_dbnames(dbs_all, in_dbs=[], ex_dbs=[], in_regex='',
                             ex_regex='', in_priority=False, logger=None):
        '''
        Target:
            - filter a list of databases' names taking into account inclusion
              and exclusion parameters and their priority.
        Parameters:
            - dbs_all: list to filter.
            - in_dbs: list with the databases' names to include.
            - ex_dbs: list with the databases' names to exclude.
            - in_regex: regular expression which indicates the databases' names
              to include.
            - ex_regex: regular expression which indicates the databases' names
              to exclude.
            - in_priority: a flag which determines whether the inclusion
              parameters must take precedence over the exclusion ones.
            - logger: a logger to show and log some messages.
        Return:
            - a filtered list (subset of "dbs_all").
        '''
        if not logger:
            logger = Logger()

        bkp_list = []

        if in_priority:  # If inclusion is over exclusion
            # Apply exclusion first and then inclusion
            bkp_list = DbSelector.dbname_filter_exclude(dbs_all, ex_dbs,
                                                        ex_regex, logger)
            bkp_list = DbSelector.dbname_filter_include(bkp_list, in_dbs,
                                                        in_regex, logger)
        else:
            # Apply inclusion first and then exclusion
            bkp_list = DbSelector.dbname_filter_include(dbs_all, in_dbs,
                                                        in_regex, logger)
            bkp_list = DbSelector.dbname_filter_exclude(bkp_list, ex_dbs,
                                                        ex_regex, logger)

        logger.highlight('info', Messenger.SEARCHING_SELECTED_DBS, 'white')

        if bkp_list == []:
            logger.highlight('warning', Messenger.EMPTY_DBNAME_LIST, 'yellow',
                             effect='bold')
        else:
            for dbname in bkp_list:
                logger.info(Messenger.SELECTED_DB.format(dbname=dbname))
        return bkp_list
Developer: alejandrosantana, Project: py_pg_tools, Lines: 49, Source file: db_selector.py
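
A hypothetical call to the static method above; the module path is guessed from the source file name (db_selector.py) and the database names are invented for illustration. The point is the ordering: with in_priority left at False, inclusion is applied first and exclusion afterwards, and every surviving name is reported through Logger.info.

from db_selector import DbSelector  # assumed import path

dbs_all = ['sales', 'sales_test', 'hr', 'scratch_tmp']
selected = DbSelector.get_filtered_dbnames(
    dbs_all,
    in_regex=r'^sales',       # include every name starting with "sales"
    ex_regex=r'.*_test$',     # ...but drop the ones ending in "_test"
    ex_dbs=['scratch_tmp'],   # and this explicit name as well
    in_priority=False)        # inclusion first, then exclusion
# Each selected name is also logged via Logger.info (SELECTED_DB).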

Example 2: emails

# Required import: from logger.logger import Logger [as alias]
# Or: from logger.logger.Logger import info [as alias]
class Mailer:

    level = 1  # Verbosity level of the email
    from_info = {}  # Information about the sender's email account
    to_infos = []  # List with the destination email addresses
    cc_infos = []  # List with the destination email addresses (carbon copy)
    bcc_infos = []  # List with the destination email addresses (blind carbon copy)
    server_tag = ''  # Alias of the sender's machine
    external_ip = ''  # External IP of the sender's machine
    op_type = ''  # Executed action
    group = None  # Affected group
    bkp_path = None  # Affected path of backups
    logger = None  # Logger to show and log some messages

    # Definition of constants

    OP_TYPES = {
        'u': 'Undefined method',
        'a': 'Alterer',
        'B': 'Backer',
        'd': 'Dropper',
        'r': 'Replicator',
        'R': 'Restorer',
        'T': 'Trimmer',
        't': 'Terminator',
        'v': 'Vacuumer',
    }

    OP_RESULTS = {
        0: ('<h2>{op_type}: <span style="color: green;">OK</span> at '
            '"{server_tag}"</h2>Date: <span style="font-weight: bold">{date}'
            '</span><br/>Time: <span style="font-weight: bold">{time}</span>'
            '<br/>Time zone: <span style="font-weight: bold">{zone}</span>'
            '<br/>Host name: <span style="font-weight: bold">{server}</span>'
            '<br/>Netifaces IPs: <span style="font-weight: bold">'
            '{internal_ips}</span><br/>External IP: <span style="font-weight: '
            'bold">{external_ip}</span><br/>Group: <span style="font-weight: '
            'bold">{group}</span><br/>Path: <span style="font-weight: bold">'
            '{bkp_path}</span><br/><br/><br/>The process has been executed '
            'successfully.<br/><br/>You can see its log file at the following '
            'path:<br/><br/>{log_file}.'),
        1: ('<h2>{op_type}: <span style="color: orange;">WARNING</span> at '
            '"{server_tag}"</h2>Date: <span style="font-weight: bold">{date}'
            '</span><br/>Time: <span style="font-weight: bold">{time}</span>'
            '<br/>Time zone: <span style="font-weight: bold">{zone}</span>'
            '<br/>Host name: <span style="font-weight: bold">{server}</span>'
            '<br/>Netifaces IPs: <span style="font-weight: bold">'
            '{internal_ips}</span><br/>External IP: <span style="font-weight: '
            'bold">{external_ip}</span><br/>Group: <span style="font-weight: '
            'bold">{group}</span><br/>Path: <span style="font-weight: bold">'
            '{bkp_path}</span><br/><br/><br/>There were some warnings during '
            'the process, but not critical errors. Anyway, please check it, '
            'because its behaviour is not bound to have been the expected '
            'one.<br/><br/>You can see its log file at the following path:'
            '<br/><br/>{log_file}.'),
        2: ('<h2>{op_type}: <span style="color: red;">ERROR</span> at '
            '"{server_tag}"</h2>Date: <span style="font-weight: bold">{date}'
            '</span><br/>Time: <span style="font-weight: bold">{time}</span>'
            '<br/>Time zone: <span style="font-weight: bold">{zone}</span>'
            '<br/>Host name: <span style="font-weight: bold">{server}</span>'
            '<br/>Netifaces IPs: <span style="font-weight: bold">'
            '{internal_ips}</span><br/>External IP: <span style="font-weight: '
            'bold">{external_ip}</span><br/>Group: <span style="font-weight: '
            'bold">{group}</span><br/>Path: <span style="font-weight: bold">'
            '{bkp_path}</span><br/><br/><br/>There were some errors during '
            'the process, and they prevented some operations, because the '
            'execution was truncated. Please check immediately.<br/><br/>You '
            'can see its log file at the following path:<br/><br/>'
            '{log_file}.'),
        3: ('<h2>{op_type}: <span style="color: purple;">CRITICAL</span> at '
            '"{server_tag}"</h2>Date: <span style="font-weight: bold">{date}'
            '</span><br/>Time: <span style="font-weight: bold">{time}</span>'
            '<br/>Time zone: <span style="font-weight: bold">{zone}</span>'
            '<br/>Host name: <span style="font-weight: bold">{server}</span>'
            '<br/>Netifaces IPs: <span style="font-weight: bold">'
            '{internal_ips}</span><br/>External IP: <span style="font-weight: '
            'bold">{external_ip}</span><br/>Group: <span style="font-weight: '
            'bold">{group}</span><br/>Path: <span style="font-weight: bold">'
            '{bkp_path}</span><br/><br/><br/>There were some critical errors '
            'during the process. The execution could not be carried out. '
            'Please check immediately.<br/><br/>You can see its log file at '
            'the following path:<br/><br/>{log_file}.'),
    }

    OP_RESULTS_NO_HTML = {
        0: ('{op_type}: OK at "{server_tag}"\n'
            'Date: {date}\n'
            'Time: {time}\n'
            'Time zone: {zone}\n'
            'Host name: {server}\n'
            'Netifaces IPs: {internal_ips}\n'
            'External IP: {external_ip}\n'
            'Group: {group}\n'
            'Path: {bkp_path}\n'
            'The process has been executed successfully.\n'
            'You can see its log file at the following path:\n'
            '{log_file}.\n'),
        1: ('{op_type}: WARNING at "{server_tag}"\n'
            'Date: {date}\n'
            'Time: {time}\n'
#......... some code omitted here .........
Developer: forvas, Project: py_pg_tools, Lines: 103, Source file: mailer.py
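
The OP_RESULTS / OP_RESULTS_NO_HTML dictionaries are ordinary str.format templates keyed by a severity level. Below is a hedged sketch of how the level-0 plain-text template might be filled in; the module path (mailer.py) and all concrete values are assumptions.

from datetime import datetime
from mailer import Mailer  # assumed import path

now = datetime.now()
body = Mailer.OP_RESULTS_NO_HTML[0].format(
    op_type=Mailer.OP_TYPES['B'],          # 'Backer'
    server_tag='db01',
    date=now.strftime('%Y-%m-%d'),
    time=now.strftime('%H:%M:%S'),
    zone='UTC',
    server='db01.example.org',
    internal_ips='10.0.0.5',
    external_ip='203.0.113.7',
    group='nightly',
    bkp_path='/var/backups/pg/',
    log_file='/var/log/py_pg_tools/backer.log')
print(body)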

Example 3: __init__

# Required import: from logger.logger import Logger [as alias]
# Or: from logger.logger.Logger import info [as alias]
class TrimmerCluster:

    bkp_path = ''  # The path where the backups are stored
    prefix = ''  # The prefix of the backups' names
    min_n_bkps = None  # Minimum number of a database's backups to keep
    exp_days = None  # Number of days which make a backup obsolete
    max_size = None  # Maximum size of a group of database's backups
    # Maximum size in Bytes of a group of database's backups
    max_size_bytes = None
    # Equivalence factor used to convert the unit of measure specified in
    # max_size into bytes
    equivalence = 10 ** 6
    logger = None  # Logger to show and log some messages

    def __init__(self, bkp_path='', prefix='', min_n_bkps=1, exp_days=365,
                 max_size=5000, logger=None):

        if logger:
            self.logger = logger
        else:
            self.logger = Logger()

        if bkp_path and os.path.isdir(bkp_path):
            self.bkp_path = bkp_path
        else:
            self.logger.stop_exe(Messenger.DIR_DOES_NOT_EXIST)

        if prefix is None:
            self.prefix = Default.PREFIX
        else:
            self.prefix = prefix

        if min_n_bkps is None:
            self.min_n_bkps = Default.MIN_N_BKPS
        elif isinstance(min_n_bkps, int):
            self.min_n_bkps = min_n_bkps
        elif Checker.str_is_int(min_n_bkps):
            self.min_n_bkps = Casting.str_to_int(min_n_bkps)
        else:
            self.logger.stop_exe(Messenger.INVALID_MIN_BKPS)

        if exp_days is None:
            self.exp_days = Default.EXP_DAYS
        elif isinstance(exp_days, int) and exp_days >= -1:
            self.exp_days = exp_days
        elif Checker.str_is_valid_exp_days(exp_days):
            self.exp_days = Casting.str_to_int(exp_days)
        else:
            self.logger.stop_exe(Messenger.INVALID_OBS_DAYS)

        if max_size is None:
            self.max_size = Default.MAX_SIZE
        elif Checker.str_is_valid_max_size(max_size):
            self.max_size = max_size
        else:
            self.logger.stop_exe(Messenger.INVALID_MAX_TSIZE)

        # Split a string with size and unit of measure into a dictionary
        self.max_size = Casting.str_to_max_size(self.max_size)
        # Get the equivalence in Bytes of the specified unit of measure
        self.equivalence = Casting.get_equivalence(self.max_size['unit'])
        # Get the specified size in Bytes
        self.max_size_bytes = self.max_size['size'] * self.equivalence

        message = Messenger.CL_TRIMMER_VARS.format(
            bkp_path=self.bkp_path, prefix=self.prefix,
            min_n_bkps=self.min_n_bkps, exp_days=self.exp_days,
            max_size=self.max_size)
        self.logger.debug(Messenger.CL_TRIMMER_VARS_INTRO)
        self.logger.debug(message)

    def trim_cluster(self, ht_bkps_list):
        '''
        Target:
            - remove (if necessary) some cluster's backups, taking into
              account some parameters in the following order: minimum number of
              backups to keep > obsolete backups.
        Parameters:
            - ht_bkps_list: list of backups of a cluster to analyse and trim.
        '''
        if self.exp_days == -1:  # No expiration date
            x_days_ago = None
        else:
            x_days_ago = time.time() - (60 * 60 * 24 * self.exp_days)

        # Store the total number of backups of the cluster
        num_bkps = len(ht_bkps_list)
        # Clone the list to avoid conflict errors when removing
        ht_bkps_lt = ht_bkps_list[:]

        unlinked = False

        self.logger.highlight('info', Messenger.BEGINNING_CL_TRIMMER, 'white')

        start_time = DateTools.get_current_datetime()

        for f in ht_bkps_list:

            # Break if the number of backups does not exceed the minimum
            if num_bkps <= self.min_n_bkps:
#......... some code omitted here .........
Developer: alejandrosantana, Project: py_pg_tools, Lines: 103, Source file: trimmer.py
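
Two small calculations drive the class above: converting the max_size string into bytes through the unit's equivalence, and turning exp_days into a cutoff timestamp for trim_cluster(). A standalone sketch is given below; the helper names, the unit table and the '5000MB' sample value are my own assumptions, since Casting and Checker are not shown.

import re
import time

EQUIVALENCES = {'B': 1, 'KB': 10 ** 3, 'MB': 10 ** 6, 'GB': 10 ** 9}  # assumed units

def max_size_to_bytes(max_size):
    # Split "<size><unit>" and convert to bytes, e.g. '5000MB' -> 5 * 10 ** 9
    size, unit = re.match(r'^(\d+)([A-Z]+)$', max_size).groups()
    return int(size) * EQUIVALENCES[unit]

def expiration_cutoff(exp_days):
    # exp_days == -1 disables expiration; otherwise backups older than the
    # returned epoch timestamp become candidates for trimming
    if exp_days == -1:
        return None
    return time.time() - 60 * 60 * 24 * exp_days

print(max_size_to_bytes('5000MB'), expiration_cutoff(365))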

Example 4: __init__

# Required import: from logger.logger import Logger [as alias]
# Or: from logger.logger.Logger import info [as alias]
class Terminator:

    target_all = None  # Flag which determines whether to terminate every connection
    target_user = None  # Terminate any connection of a specific user
    target_dbs = []  # Terminate any connection to a list of databases
    # An object with connection parameters to connect to PostgreSQL
    connecter = None
    logger = None  # Logger to show and log some messages

    def __init__(self, connecter, target_all=False, target_user='',
                 target_dbs=[], logger=None):

        if logger:
            self.logger = logger
        else:
            self.logger = Logger()

        if connecter:
            self.connecter = connecter
        else:
            self.logger.stop_exe(Messenger.NO_CONNECTION_PARAMS)

        if target_all is None:
            self.target_all = target_all
        elif isinstance(target_all, bool):
            self.target_all = target_all
        elif Checker.str_is_bool(target_all):
            self.target_all = Casting.str_to_bool(target_all)
        else:
            self.logger.stop_exe(Messenger.INVALID_TARGET_ALL)

        self.target_user = target_user

        if target_dbs is None:
            self.target_dbs = []
        elif isinstance(target_dbs, list):
            self.target_dbs = target_dbs
        else:
            self.target_dbs = Casting.str_to_list(target_dbs)

        message = Messenger.TERMINATOR_VARS.format(
            server=self.connecter.server, user=self.connecter.user,
            port=self.connecter.port, target_all=self.target_all,
            target_user=target_user, target_dbs=self.target_dbs)
        self.logger.debug(Messenger.TERMINATOR_VARS_INTRO)
        self.logger.debug(message)

    def terminate_backend_user(self):
        '''
        Target:
            - terminate every connection of a specific user to PostgreSQL
              (unless the target user is the one who is running the program).
        '''
        message = Messenger.BEGINNING_TERMINATE_USER_CONN.format(
            target_user=self.target_user)
        self.logger.highlight('info', message, 'white')

        try:
            pg_pid = self.connecter.get_pid_str()  # Get PID variable's name

            sql = Queries.GET_CURRENT_PG_USER
            self.connecter.cursor.execute(sql)
            current_pg_user = self.connecter.cursor.fetchone()[0]

            if self.target_user == current_pg_user:
                message = Messenger.TARGET_USER_IS_CURRENT_USER.format(
                    target_user=self.target_user)
                self.logger.highlight('warning', message, 'yellow')

            else:
                formatted_sql = Queries.BACKEND_PG_USER_EXISTS.format(
                    pg_pid=pg_pid, target_user=self.target_user)
                self.connecter.cursor.execute(formatted_sql)
                result = self.connecter.cursor.fetchone()

                if result:
                    formatted_sql = Queries.TERMINATE_BACKEND_PG_USER.format(
                        pg_pid=pg_pid, target_user=self.target_user)
                    self.connecter.cursor.execute(formatted_sql)
                else:
                    message = Messenger.NO_USER_CONNS.format(
                        target_user=self.target_user)
                    self.logger.info(message)

            message = Messenger.TERMINATE_USER_CONN_DONE.format(
                target_user=self.target_user)
            self.logger.highlight('info', message, 'green')

        except Exception as e:
            self.logger.debug('Error en la función "terminate_backend_user": '
                              '{}.'.format(str(e)))
            message = Messenger.TERMINATE_USER_CONN_FAIL.format(
                target_user=self.target_user)
            self.logger.highlight('warning', message, 'yellow', effect='bold')

        self.logger.highlight('info', Messenger.TERMINATOR_DONE, 'green')

    def terminate_backend_db(self, target_db):
        '''
        Target:
#......... some code omitted here .........
Developer: alejandrosantana, Project: py_pg_tools, Lines: 103, Source file: terminator.py
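
The Queries constants used by terminate_backend_user() are not shown. The sketch below is an assumption about the kind of SQL they wrap: terminating every backend of a given user through pg_stat_activity, while refusing to touch the current session. Connection parameters and the role name are illustrative only.

import psycopg2

target_user = 'report_reader'  # hypothetical role

conn = psycopg2.connect(host='localhost', port=5432,
                        user='postgres', dbname='postgres')
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute('SELECT current_user')
    if cur.fetchone()[0] == target_user:
        print('Refusing to terminate the user running the program')
    else:
        # pg_stat_activity.pid is named procpid in PostgreSQL older than 9.2
        cur.execute('SELECT pg_terminate_backend(pid) FROM pg_stat_activity '
                    'WHERE usename = %s', (target_user,))
conn.close()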

Example 5: file

# Required import: from logger.logger import Logger [as alias]
# Or: from logger.logger.Logger import info [as alias]
class Restorer:

    # An object with connection parameters to connect to PostgreSQL
    connecter = None
    logger = None  # Logger to show and log some messages
    db_backup = ''  # Absolute path of the backup file (of a database)
    new_dbname = ''  # New name for the database restored in PostgreSQL

    def __init__(self, connecter=None, db_backup='', new_dbname='',
                 logger=None):

        if logger:
            self.logger = logger
        else:
            self.logger = Logger()

        if connecter:
            self.connecter = connecter
        else:
            self.logger.stop_exe(Messenger.NO_CONNECTION_PARAMS)

        if db_backup and os.path.isfile(db_backup):
            self.db_backup = db_backup
        else:
            self.logger.stop_exe(Messenger.NO_BKP_TO_RESTORE)

        if new_dbname:
            self.new_dbname = new_dbname
        else:
            self.logger.stop_exe(Messenger.NO_DBNAME_TO_RESTORE)

        message = Messenger.DB_RESTORER_VARS.format(
            server=self.connecter.server, user=self.connecter.user,
            port=self.connecter.port, db_backup=self.db_backup,
            new_dbname=self.new_dbname)
        self.logger.debug(Messenger.DB_RESTORER_VARS_INTRO)
        self.logger.debug(message)

    def restore_db_backup(self):
        '''
        Target:
            - restore a database's backup in PostgreSQL.
        '''
        #replicator = Replicator(self.connecter, self.new_dbname,
                                #Default.RESTORING_TEMPLATE, self.logger)
        #result = self.connecter.allow_db_conn(Default.RESTORING_TEMPLATE)
        #if result:
            #replicator.replicate_pg_db()
            #self.connecter.disallow_db_conn(Default.RESTORING_TEMPLATE)
        #else:
            #self.logger.stop_exe(Messenger.ALLOW_DB_CONN_FAIL.format(
                #dbname=Default.RESTORING_TEMPLATE))

        # Regular expression which must match the backup's name
        regex = r'.*db_(.+)_(\d{8}_\d{6}_.+)\.(dump|bz2|gz|zip)$'
        regex = re.compile(regex)

        if re.match(regex, self.db_backup):
            # Store the parts of the backup's name (name, date, ext)
            parts = regex.search(self.db_backup).groups()
            # Store only the extension to know the type of file
            ext = parts[2]
        else:
            self.logger.stop_exe(Messenger.NO_BACKUP_FORMAT)

        message = Messenger.BEGINNING_DB_RESTORER.format(
            db_backup=self.db_backup, new_dbname=self.new_dbname)
        self.logger.highlight('info', message, 'white')
        self.logger.info(Messenger.WAIT_PLEASE)

        if ext == 'gz':
            command = 'gunzip -c {} -k | pg_restore -U {} -h {} -p {} ' \
                      '-d {}'.format(self.db_backup, self.connecter.user,
                                     self.connecter.server,
                                     self.connecter.port, self.new_dbname)
        elif ext == 'bz2':
            command = 'bunzip2 -c {} -k | pg_restore -U {} -h {} -p {} ' \
                      '-d {}'.format(self.db_backup, self.connecter.user,
                                     self.connecter.server,
                                     self.connecter.port, self.new_dbname)
        elif ext == 'zip':
            command = 'unzip -p {} | pg_restore -U {} -h {} -p {} ' \
                      '-d {}'.format(self.db_backup, self.connecter.user,
                                     self.connecter.server,
                                     self.connecter.port, self.new_dbname)
        else:
            command = 'pg_restore -U {} -h {} -p {} -d {} {}'.format(
                self.connecter.user, self.connecter.server,
                self.connecter.port, self.new_dbname, self.db_backup)

        try:
            start_time = DateTools.get_current_datetime()
            # Perform the restoration of the database
            result = subprocess.call(command, shell=True)
            end_time = DateTools.get_current_datetime()
            # Get and show the process' duration
            diff = DateTools.get_diff_datetimes(start_time, end_time)

            if result != 0:
                raise Exception()
#......... some code omitted here .........
Developer: anubia, Project: py_pg_tools, Lines: 103, Source file: restorer.py
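
The regular expression above decides both whether a file is accepted and which decompressor is piped into pg_restore. A quick standalone check of that pattern, with a made-up file name:

import re

regex = re.compile(r'.*db_(.+)_(\d{8}_\d{6}_.+)\.(dump|bz2|gz|zip)$')

db_backup = '/var/backups/pg/db_sales_20160414_093015_f.dump'  # invented name
match = regex.match(db_backup)
if match:
    dbname, timestamp, ext = match.groups()
    print(dbname, timestamp, ext)  # -> sales 20160414_093015_f dump
else:
    print('Not a recognised backup name')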

Example 6: __init__

# Required import: from logger.logger import Logger [as alias]
# Or: from logger.logger.Logger import info [as alias]
class Scheduler:

    time = ''  # Time when the command is going to be executed in Cron
    command = ''  # Command which is going to be executed in Cron.
    logger = None  # Logger to show and log some messages

    def __init__(self, time='', command='', logger=None):

        if logger:
            self.logger = logger
        else:
            self.logger = Logger()

        self.time = time.strip()
        self.command = command.strip()

    def show_lines(self):
        '''
        Target:
            - show the lines of the program's CRON file.
        '''
        self.logger.highlight('info', Messenger.SHOWING_CRONTAB_FILE, 'white')
        print()

        cron = CronTab(user=True)

        if cron:
            for line in cron.lines:
                print(str(line))
        else:
            print('\033[1;40;93m' + Messenger.NO_CRONTAB_FILE + '\033[0m')

    def add_line(self):
        '''
        Target:
            - add a line to the program's CRON file.
        '''
        cron = CronTab(user=True)

        job = cron.new(command=self.command)

        if self.time in ['@yearly', '@annually']:
            job.setall('0 0 1 1 *')
        elif self.time == '@monthly':
            job.setall('0 0 1 * *')
        elif self.time == '@weekly':
            job.setall('0 0 * * 0')
        elif self.time in ['@daily', '@midnight']:
            job.setall('0 0 * * *')
        elif self.time == '@hourly':
            job.setall('0 * * * *')
        elif self.time == '@reboot':
            job.every_reboot()
        else:
            job.setall(self.time)

        self.logger.highlight('info', Messenger.SCHEDULER_ADDING, 'white')

        if not cron:
            self.logger.info(Messenger.CREATING_CRONTAB)

        try:
            cron.write()
            self.logger.highlight('info', Messenger.SCHEDULER_ADD_DONE,
                                  'green')
            #print(cron.render())

        except Exception as e:
            self.logger.debug('Error en la función "add_line": {}.'.format(
                str(e)))
            self.logger.stop_exe(Messenger.SCHEDULER_ADD_FAIL)

    def remove_line(self):
        '''
        Target:
            - remove a line from the program's CRON file.
        '''
        self.logger.highlight('info', Messenger.SCHEDULER_REMOVING, 'white')

        cron = CronTab(user=True)

        if not cron:
            self.logger.stop_exe(Messenger.NO_CRONTAB_FILE)

        deletion = False

        line = self.time + ' ' + self.command

        for job in cron:

            if str(job).strip() == line:

                try:
                    cron.remove(job)
                    message = Messenger.SCHEDULER_REMOVE_DONE.format(job=job)
                    self.logger.highlight('info', message, 'green')
                    deletion = True

                except Exception as e:
                    self.logger.debug('Error en la función "remove_line": '
#......... some code omitted here .........
Developer: alejandrosantana, Project: py_pg_tools, Lines: 103, Source file: scheduler.py
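
add_line() mostly expands the familiar crontab "@" aliases into five-field schedules before handing them to python-crontab. A standalone sketch of that mapping is shown below; the example command is invented.

from crontab import CronTab  # python-crontab

ALIASES = {
    '@yearly': '0 0 1 1 *', '@annually': '0 0 1 1 *',
    '@monthly': '0 0 1 * *', '@weekly': '0 0 * * 0',
    '@daily': '0 0 * * *', '@midnight': '0 0 * * *',
    '@hourly': '0 * * * *',
}

def add_job(time_spec, command):
    cron = CronTab(user=True)
    job = cron.new(command=command)
    if time_spec == '@reboot':
        job.every_reboot()
    else:
        # Unknown strings are assumed to already be five-field schedules
        job.setall(ALIASES.get(time_spec, time_spec))
    cron.write()

add_job('@daily', 'python3 py_pg_tools.py B -c backer.cfg')  # invented command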

Example 7: __init__

# Required import: from logger.logger import Logger [as alias]
# Or: from logger.logger.Logger import info [as alias]
class Alterer:

    in_dbs = []  # List of databases to be included in the process
    old_role = ''  # Current owner of the database's tables
    new_role = ''  # New owner for the database and its tables
    # An object with connection parameters to connect to PostgreSQL
    connecter = None
    logger = None  # Logger to show and log some messages

    def __init__(self, connecter=None, in_dbs=[], old_role='', new_role='',
                 logger=None):

        if logger:
            self.logger = logger
        else:
            self.logger = Logger()

        if connecter:
            self.connecter = connecter
        else:
            self.logger.stop_exe(Msg.NO_CONNECTION_PARAMS)

        if isinstance(in_dbs, list):
            self.in_dbs = in_dbs
        else:
            self.in_dbs = Casting.str_to_list(in_dbs)

        if old_role:
            self.old_role = old_role
        else:
            self.logger.stop_exe(Msg.NO_OLD_ROLE)

        if not new_role:
            self.logger.stop_exe(Msg.NO_NEW_ROLE)
        # First check whether the user exists in PostgreSQL or not
        self.connecter.cursor.execute(Queries.PG_USER_EXISTS, (new_role, ))
        # Do not alter database if the user does not exist
        result = self.connecter.cursor.fetchone()
        if result:
            self.new_role = new_role
        else:
            msg = Msg.USER_DOES_NOT_EXIST.format(user=new_role)
            self.logger.stop_exe(msg)

        msg = Msg.ALTERER_VARS.format(
            server=self.connecter.server, user=self.connecter.user,
            port=self.connecter.port, in_dbs=self.in_dbs,
            old_role=self.old_role, new_role=self.new_role)
        self.logger.debug(Msg.ALTERER_VARS_INTRO)
        self.logger.debug(msg)

    def alter_db_owner(self, db):
        '''
        Target:
            - change the owner of a database and its tables.
        Parameters:
            - db: database which is going to be altered.
        Return:
            - a boolean which indicates the success of the process.
        '''
        msg = Msg.ALTERER_FEEDBACK.format(old_role=self.old_role,
                                          new_role=self.new_role)
        self.logger.info(msg)

        success = True
        dbname = db['datname']

        if db['owner'] != 'postgres':  # Do not allow changing the 'postgres' owner

            if db['datallowconn'] == 1:  # Check if the db allows connections

                try:
                    # Change the owner of the database
                    self.connecter.cursor.execute(
                        Queries.CHANGE_PG_DB_OWNER.format(
                            dbname=dbname, new_role=self.new_role))

                except Exception as e:
                    success = False
                    self.logger.debug('Error en la función "alter_db_owner": '
                                      '{}'.format(str(e)))
                    msg = Msg.CHANGE_PG_DB_OWNER_FAIL
                    self.logger.highlight('warning', msg, 'yellow')

                # Start another connection to the target database to be able to
                # apply the next query
                own_connecter = Connecter(server=self.connecter.server,
                                          user=self.connecter.user,
                                          port=self.connecter.port,
                                          database=dbname, logger=self.logger)

                # Disallow connections to the database during the process
                result = self.connecter.disallow_db_conn(dbname)
                if not result:
                    msg = Msg.DISALLOW_CONN_TO_PG_DB_FAIL.format(dbname=dbname)
                    self.logger.highlight('warning', msg, 'yellow')

                try:
                    # Change the owner of the database's tables
                    own_connecter.cursor.execute(
#......... some code omitted here .........
Developer: alejandrosantana, Project: py_pg_tools, Lines: 103, Source file: alterer.py
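
The Queries constants are not shown here either; the sketch below is a guess at the statements CHANGE_PG_DB_OWNER and the table-owner change expand to. It also mirrors a detail visible above: changing ownership of the objects inside the database needs a second connection opened against that database. All names are invented.

import psycopg2
from psycopg2 import sql

dbname, old_role, new_role = 'sales', 'old_owner', 'app_owner'

admin = psycopg2.connect(host='localhost', port=5432,
                         user='postgres', dbname='postgres')
admin.autocommit = True
with admin.cursor() as cur:
    cur.execute(sql.SQL('ALTER DATABASE {} OWNER TO {}').format(
        sql.Identifier(dbname), sql.Identifier(new_role)))

# Second connection to the target database, as the class does with its own
# Connecter, to reassign the objects owned inside it
target = psycopg2.connect(host='localhost', port=5432,
                          user='postgres', dbname=dbname)
target.autocommit = True
with target.cursor() as cur:
    cur.execute(sql.SQL('REASSIGN OWNED BY {} TO {}').format(
        sql.Identifier(old_role), sql.Identifier(new_role)))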

Example 8: __init__

# Required import: from logger.logger import Logger [as alias]
# Or: from logger.logger.Logger import info [as alias]

#......... some code omitted here .........
        msg = Msg.CL_BACKER_VARS.format(
            server=self.connecter.server, user=self.connecter.user,
            port=self.connecter.port, bkp_path=self.bkp_path, group=self.group,
            bkp_type=self.bkp_type, prefix=self.prefix, vacuum=self.vacuum)
        self.logger.debug(Msg.CL_BACKER_VARS_INTRO)
        self.logger.debug(msg)

    def backup_all(self, bkps_dir):
        '''
        Target:
            - make a backup of a cluster.
        Parameters:
            - bkps_dir: directory where the backup is going to be stored.
        Return:
            - a boolean which indicates the success of the process.
        '''
        success = True
        # Get date and time of the zone
        init_ts = DateTools.get_date()
        # Get current year
        year = str(DateTools.get_year(init_ts))
        # Get current month
        month = str(DateTools.get_month(init_ts))
        # Create new directories with the year and the month of the backup
        bkp_dir = bkps_dir + year + '/' + month + '/'
        Dir.create_dir(bkp_dir, self.logger)

        # Set backup's name
        file_name = self.prefix + 'ht_' + self.connecter.server + \
            str(self.connecter.port) + '_cluster_' + init_ts + '.' + \
            self.bkp_type

        # Store the command to do depending on the backup type
        if self.bkp_type == 'gz':  # Zip with gzip
            command = 'pg_dumpall -U {} -h {} -p {} | gzip > {}'.format(
                self.connecter.user, self.connecter.server,
                self.connecter.port, bkp_dir + file_name)
        elif self.bkp_type == 'bz2':  # Zip with bzip2
            command = 'pg_dumpall -U {} -h {} -p {} | bzip2 > {}'.format(
                self.connecter.user, self.connecter.server,
                self.connecter.port, bkp_dir + file_name)
        elif self.bkp_type == 'zip':  # Zip with zip
            command = 'pg_dumpall -U {} -h {} -p {} | zip > {}'.format(
                self.connecter.user, self.connecter.server,
                self.connecter.port, bkp_dir + file_name)
        else:  # Do not zip
            command = 'pg_dumpall -U {} -h {} -p {} > {}'.format(
                self.connecter.user, self.connecter.server,
                self.connecter.port, bkp_dir + file_name)
        try:
            # Execute the command in console
            result = subprocess.call(command, shell=True)
            if result != 0:
                raise Exception()

        except Exception as e:
            self.logger.debug('Error en la función "backup_all": {}.'.format(
                str(e)))
            success = False

        return success

    def backup_cl(self):
        '''
        Target:
            - vacuum if necessary and make a backup of a cluster.
        '''
        self.logger.highlight('info', Msg.CHECKING_BACKUP_DIR, 'white')

        # Create a new directory with the name of the group
        bkps_dir = self.bkp_path + self.group + Default.CL_BKPS_DIR
        Dir.create_dir(bkps_dir, self.logger)

        self.logger.info(Msg.DESTINY_DIR.format(path=bkps_dir))

        # Vacuum the databases before the backup process if necessary
        if self.vacuum:
            vacuumer = Vacuumer(connecter=self.connecter, logger=self.logger)
            dbs_all = vacuumer.connecter.get_pg_dbs_data(vacuumer.ex_templates,
                                                         vacuumer.db_owner)
            vacuumer.vacuum_dbs(dbs_all)

        self.logger.highlight('info', Msg.BEGINNING_CL_BACKER, 'white')

        start_time = DateTools.get_current_datetime()
        # Make the backup of the cluster
        success = self.backup_all(bkps_dir)
        end_time = DateTools.get_current_datetime()
        # Get and show the process' duration
        diff = DateTools.get_diff_datetimes(start_time, end_time)

        if success:
            msg = Msg.CL_BACKER_DONE.format(diff=diff)
            self.logger.highlight('info', msg, 'green', effect='bold')
        else:
            self.logger.highlight('warning', Msg.CL_BACKER_FAIL,
                                  'yellow', effect='bold')

        self.logger.highlight('info', Msg.BACKER_DONE, 'green',
                              effect='bold')
Developer: alejandrosantana, Project: py_pg_tools, Lines: 104, Source file: backer.py
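
backup_all() only builds a shell command and hands it to subprocess: pg_dumpall piped through the chosen compressor, or redirected straight to a file. A standalone restatement of that branch is sketched below; the destination path and connection values are invented.

import subprocess

def build_cluster_backup_command(user, host, port, dest, bkp_type='gz'):
    base = 'pg_dumpall -U {} -h {} -p {}'.format(user, host, port)
    compressors = {'gz': 'gzip', 'bz2': 'bzip2', 'zip': 'zip'}
    if bkp_type in compressors:
        return '{} | {} > {}'.format(base, compressors[bkp_type], dest)
    return '{} > {}'.format(base, dest)  # plain SQL dump, no compression

command = build_cluster_backup_command(
    'postgres', 'localhost', 5432,
    '/var/backups/pg/ht_localhost5432_cluster_20160414_093015.gz')
result = subprocess.call(command, shell=True)  # 0 means the dump succeeded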

Example 9: process

# Required import: from logger.logger import Logger [as alias]
# Or: from logger.logger.Logger import info [as alias]

#......... some code omitted here .........
        elif self.bkp_type == 'bz2':  # Zip with bzip2
            command = 'pg_dump {} -Fc -U {} -h {} -p {} | bzip2 > {}'.format(
                dbname, self.connecter.user, self.connecter.server,
                self.connecter.port, bkp_dir + file_name)
        elif self.bkp_type == 'zip':  # Zip with zip
            command = 'pg_dump {} -Fc -U {} -h {} -p {} | zip > {}'.format(
                dbname, self.connecter.user, self.connecter.server,
                self.connecter.port, bkp_dir + file_name)
        else:  # Do not zip
            command = 'pg_dump {} -Fc -U {} -h {} -p {} > {}'.format(
                dbname, self.connecter.user, self.connecter.server,
                self.connecter.port, bkp_dir + file_name)

        try:
            # Execute the command in console
            result = subprocess.call(command, shell=True)
            if result != 0:
                raise Exception()

        except Exception as e:
            self.logger.debug('Error en la función "backup_db": {}.'.format(
                str(e)))
            success = False

        return success

    def backup_dbs(self, dbs_all):
        '''
        Target:
            - make a backup of some specified databases.
        Parameters:
            - dbs_all: names of the databases which are going to be backed up.
        '''
        self.logger.highlight('info', Msg.CHECKING_BACKUP_DIR, 'white')

        # Create a new directory with the name of the group
        bkps_dir = self.bkp_path + self.group + Default.DB_BKPS_DIR
        Dir.create_dir(bkps_dir, self.logger)

        self.logger.info(Msg.DESTINY_DIR.format(path=bkps_dir))

        self.logger.highlight('info', Msg.PROCESSING_DB_BACKER, 'white')

        if dbs_all:
            for db in dbs_all:

                dbname = db['datname']
                msg = Msg.PROCESSING_DB.format(dbname=dbname)
                self.logger.highlight('info', msg, 'cyan')

                # Let the user know whether the database connection is allowed
                if not db['datallowconn']:
                    msg = Msg.FORBIDDEN_DB_CONNECTION.format(dbname=dbname)
                    self.logger.highlight('warning', msg, 'yellow',
                                          effect='bold')
                    success = False

                else:
                    # Vacuum the database before the backup process if
                    # necessary
                    if self.vacuum:
                        self.logger.info(Msg.PRE_VACUUMING_DB.format(
                            dbname=dbname))
                        vacuumer = Vacuumer(self.connecter, self.in_dbs,
                                            self.in_regex, self.in_priority,
                                            self.ex_dbs, self.ex_regex,
Developer: alejandrosantana, Project: py_pg_tools, Lines: 70, Source file: backer.py
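
The per-database branch in this last example follows the same idea as the cluster backup, but uses pg_dump in custom format (-Fc), which matches what the Restorer in example 5 later feeds to pg_restore. A minimal sketch for a single database, with invented values:

import subprocess

dbname, user, host, port = 'sales', 'postgres', 'localhost', 5432
dest = '/var/backups/pg/sales/db_sales_20160414_093015_f.dump'  # invented path

command = 'pg_dump {} -Fc -U {} -h {} -p {} > {}'.format(
    dbname, user, host, port, dest)
if subprocess.call(command, shell=True) != 0:
    raise RuntimeError('backup of {} failed'.format(dbname))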


Note: the logger.logger.Logger.info method examples in this article were compiled by 纯净天空 from GitHub, MSDocs and other open-source code and documentation platforms. The code snippets are selected from open-source projects contributed by various developers; the copyright of the source code belongs to the original authors, and distribution and use should follow each project's license. Do not reproduce without permission.