
haplog's Introduction

haplog

⚠️ DEPRECATED: This project has been deprecated, since loguru provides most of the features that haplog originally set out to implement. Happy logging, guys :)

Content

Provides enhancements for logging functionality, including:

  1. Colorize the text of log output:
    # https://talyian.github.io/ansicolors/
    # black = "\x1b[30;20m"
    red = "\x1b[31;20m"
    bold_red = "\x1b[31;1m"
    green = "\x1b[32;20m"
    yellow = "\x1b[33;20m"
    # blue = "\x1b[34;20m"
    # magenta = "\x1b[35;20m"
    # cyan = "\x1b[36;20m"
    # white = "\x1b[37;20m"
    grey = "\x1b[38;20m"
    # \x1b[38;2;r;g;bm - foreground
    # \x1b[48;2;r;g;bm - background
    reset = "\x1b[0m"

    self.formats = {
        logging.DEBUG: grey + custom_format + reset,
        logging.INFO: green + custom_format + reset,
        logging.WARNING: yellow + custom_format + reset,
        logging.ERROR: red + custom_format + reset,
        logging.CRITICAL: bold_red + custom_format + reset,
    }
  2. Redirect the standard output of third-party modules into log records, which is useful when you don't want to spend time manually changing every print() to the corresponding logging call (see the sketch after this list):
    class OutputLogger:
        """
        Serves as a pseudo file-like stream object that redirects written
        content to a logger instance. It overrides the `write` method to
        append the written messages to an internal buffer (`linebuf`).
        When a message ends with a newline character, it logs the buffered
        messages as a log record.
        """

    with redirect_stdout(
        OutputLogger(logger_name=LOGGER_NAME, logging_level=logging.DEBUG)  # type: ignore
    ):
        print(MESSAGE + " by print()")
        third_party_function()
  3. Logging (of course):
    class MultiProcessLogger:
        """
        Implements a custom logger designed for multi-process environments.
        It is based on the 'Logging Cookbook - Logging to a single file from
        multiple processes' example in the Python documentation. It utilizes
        a multiprocessing queue and a listener process to enable logging
        across multiple processes. The class provides functionality for
        logging records to a file and printing logs to the console.
        """
    1. single-process:
      def single_process():
          mpl = MultiProcessLogger(log_folder, level_console=logging.DEBUG)
          mpl.start()

          worker_configurer(mpl.queue)  # type: ignore
          logger = logging.getLogger(LOGGER_NAME)

          with redirect_stdout(
              OutputLogger(logger_name=LOGGER_NAME, logging_level=logging.DEBUG)  # type: ignore
          ):
              print(MESSAGE + " by print()")
              third_party_function()

          logger.debug(MESSAGE)
          logger.info(MESSAGE)
          logger.warning(MESSAGE)
          logger.error(MESSAGE)
          logger.critical(MESSAGE)

          mpl.join()
    2. multi-process:
      def multi_process():
          mpl = MultiProcessLogger(log_folder, level_console=logging.DEBUG)
          mpl.start()

          with concurrent.futures.ProcessPoolExecutor(max_workers=10) as executor:
              for _ in range(10):
                  executor.submit(worker_process, mpl.queue, worker_configurer)

          mpl.join()
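As a reference for item 2 above, here is a minimal sketch of such a pseudo-stream, assuming only the constructor arguments visible in the usage example; the actual haplog implementation may differ:

import logging


class OutputLogger:
    """Pseudo file-like stream that forwards written text to a logger."""

    def __init__(self, logger_name: str, logging_level: int = logging.INFO):
        self.logger = logging.getLogger(logger_name)
        self.logging_level = logging_level
        self.linebuf = ""  # holds partial writes until a newline arrives

    def write(self, msg: str) -> None:
        # Accumulate text; emit one log record per completed line.
        self.linebuf += msg
        if self.linebuf.endswith("\n"):
            self.logger.log(self.logging_level, self.linebuf.rstrip())
            self.linebuf = ""

    def flush(self) -> None:
        # print() calls flush(); a line-buffered logger has nothing to flush.
        pass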
Installation

pip install "https://github.com/changchiyou/haplog/archive/main.zip"

Example

Download and execute /examples/demo_haplog.py:

import concurrent.futures
import logging
import logging.handlers
import multiprocessing
from contextlib import redirect_stdout
from pathlib import Path

from haplog import MultiProcessLogger, OutputLogger, worker_configurer

LOGGER_NAME = "test"
MESSAGE = "test"

log_folder = Path(__file__).parent / "logs"

if not log_folder.exists():
    log_folder.mkdir()


def third_party_function():
    print(MESSAGE + " by third_party_function()")


def single_process():
    mpl = MultiProcessLogger(log_folder, level_console=logging.DEBUG)
    mpl.start()

    worker_configurer(mpl.queue)  # type: ignore
    logger = logging.getLogger(LOGGER_NAME)

    with redirect_stdout(
        OutputLogger(logger_name=LOGGER_NAME, logging_level=logging.DEBUG)  # type: ignore
    ):
        print(MESSAGE + " by print()")
        third_party_function()

    logger.debug(MESSAGE)
    logger.info(MESSAGE)
    logger.warning(MESSAGE)
    logger.error(MESSAGE)
    logger.critical(MESSAGE)

    mpl.join()


def worker_process(queue, configurer):
    import time
    from random import random

    configurer(queue)

    name = multiprocessing.current_process().name
    logger = logging.getLogger(name)

    logger.info("Worker started: %s" % name)
    time.sleep(random())
    logger.info("Worker finished: %s" % name)


def multi_process():
    mpl = MultiProcessLogger(log_folder, level_console=logging.DEBUG)
    mpl.start()

    with concurrent.futures.ProcessPoolExecutor(max_workers=10) as executor:
        for _ in range(10):
            executor.submit(worker_process, mpl.queue, worker_configurer)

    mpl.join()


if __name__ == "__main__":
    single_process()
    multi_process()
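The demo imports worker_configurer but never shows its body. Based on the Logging Cookbook recipe that MultiProcessLogger's docstring cites, such a configurer typically just attaches a QueueHandler to the root logger; a sketch follows (haplog's actual implementation may differ):

import logging
import logging.handlers


def worker_configurer(queue):
    """Route every record emitted in this process into the shared queue."""
    handler = logging.handlers.QueueHandler(queue)
    root = logging.getLogger()
    root.addHandler(handler)
    # Send everything; the listener process decides what gets kept where.
    root.setLevel(logging.DEBUG)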


Check ANSI Color Codes for more personalized color schemes.
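For instance, the \x1b[38;2;r;g;bm foreground form noted in the color snippet above accepts arbitrary 24-bit colors. A hypothetical orange scheme (format string and logger name invented for illustration) can be wired into a plain logging.Formatter:

import logging

custom_format = "%(asctime)s %(levelname)-8s %(message)s"  # hypothetical format string
orange = "\x1b[38;2;255;165;0m"  # 24-bit foreground: r=255, g=165, b=0
reset = "\x1b[0m"

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(orange + custom_format + reset))

logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.debug("This line renders in orange on a truecolor terminal")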


haplog's Issues

Missing: `pytest` unit tests

After commit 3a0ddb0, the original unit tests no longer work at all:

import logging
from pathlib import Path

from haplog import instantiate_logger

LOGGER_NAME = 'test'
MESSAGE = 'test'


def test_console_info(capsys):
    instantiate_logger(LOGGER_NAME)
    logger = logging.getLogger(LOGGER_NAME)

    logger.info(MESSAGE)

    captured = capsys.readouterr()
    assert f'INFO {Path(__file__).name} - test_console_info() : {MESSAGE}' in captured.err


def test_console_debug(capsys):
    instantiate_logger(LOGGER_NAME, level_console=logging.DEBUG)
    logger = logging.getLogger(LOGGER_NAME)

    logger.debug(MESSAGE)

    captured = capsys.readouterr()
    assert f'DEBUG {Path(__file__).name} - test_console_debug() : {MESSAGE}' in captured.err


def test_console_debug_info_level(capsys):
    instantiate_logger(LOGGER_NAME)
    logger = logging.getLogger(LOGGER_NAME)

    logger.debug(MESSAGE)

    captured = capsys.readouterr()
    assert f'DEBUG {Path(__file__).name} - test_console_debug_info_level() : {MESSAGE}' not in captured.err


def test_log(tmp_path, capsys):
    instantiate_logger(LOGGER_NAME, tmp_path)
    logger = logging.getLogger(LOGGER_NAME)

    logger.info(MESSAGE)
    logger.debug(MESSAGE)

    captured = capsys.readouterr()
    assert f'INFO {Path(__file__).name} - test_log() : {MESSAGE}' in captured.err
    assert f'DEBUG {Path(__file__).name} - test_log() : {MESSAGE}' not in captured.err

    with open(tmp_path / 'record', mode='r') as log_file:
        contents = log_file.read()

    assert f'INFO {Path(__file__).name} - test_log() : {MESSAGE}' in contents
    assert f'DEBUG {Path(__file__).name} - test_log() : {MESSAGE}' in contents

I tried to update them, but the architecture is now completely different and I currently have no idea how, so I ended up asking on StackOverflow: pytest unit test for "logging+multi-process+QueueHandler".
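For reference, one direction that might work, sketched here untested against haplog's current API: capsys cannot capture output produced in the separate listener process, so assert against the log file the listener writes instead. The log file's name is not documented above, so the sketch simply grabs the first file in the folder:

import logging

from haplog import MultiProcessLogger, worker_configurer

LOGGER_NAME = "test"
MESSAGE = "test"


def test_log_file_contains_record(tmp_path):
    mpl = MultiProcessLogger(tmp_path, level_console=logging.DEBUG)
    mpl.start()
    worker_configurer(mpl.queue)

    logging.getLogger(LOGGER_NAME).info(MESSAGE)

    mpl.join()  # stop the listener so the log file is flushed and closed

    log_file = next(tmp_path.iterdir())  # actual file name is not known here
    assert MESSAGE in log_file.read_text()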

Add single/multi-process convenience wrappers

Improve convenience further by keeping the existing functionality while adding quick-use wrappers for single/multi-process scenarios.

Currently, a developer who wants to use haplog to log a single-process script has to instantiate MultiProcessLogger and start() it, also run worker_configurer(), and finally remember to join() once everything is done:

def single_process():
    mpl = MultiProcessLogger(log_folder, level_console=logging.DEBUG)
    mpl.start()

    worker_configurer(mpl.queue)
    logger = logging.getLogger(LOGGER_NAME)

    with redirect_stdout(
        OutputLogger(logger_name=LOGGER_NAME, logging_level=logging.DEBUG)  # type: ignore
    ):
        print(MESSAGE + " by print()")
        third_party_function()

    logger.debug(MESSAGE)
    logger.info(MESSAGE)
    logger.warning(MESSAGE)
    logger.error(MESSAGE)
    logger.critical(MESSAGE)

    mpl.join()

Multi-process is much the same, and worse, because Python's built-in multiprocessing library uses pickle, it can trigger errors:

Basically, the reason you are getting this error is because multiprocessing uses pickle, which can only serialize top-module level functions in general.

There are workarounds, but since haplog currently only provides the log listener and no packaged multi-process helper, for now developers have to adapt it to their own project's needs themselves.

The simplest implementation would be to ship a separate packaged multi-process helper and let users decide whether to use it (see the hypothetical sketch after this list). That has two benefits:

  1. Developers using haplog have an easier time.
  2. Unit-testing haplog also becomes easier.
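One hypothetical shape for such a wrapper, reusing only the MultiProcessLogger/worker_configurer calls shown above (the name logging_session and its signature are invented for illustration):

import logging
from contextlib import contextmanager

from haplog import MultiProcessLogger, worker_configurer


@contextmanager
def logging_session(log_folder, level_console=logging.INFO):
    """Start the listener, configure the current process, and guarantee
    join() on exit, all in one call."""
    mpl = MultiProcessLogger(log_folder, level_console=level_console)
    mpl.start()
    worker_configurer(mpl.queue)
    try:
        yield mpl
    finally:
        mpl.join()

With a wrapper like this, the whole init/start/configure/join dance collapses into a single with logging_session(log_folder): block.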

Carry out the refactoring direction mentioned in #5 (comment), i.e. solve the problems originally encountered when using multiprocessing:

https://stackoverflow.com/questions/72766345/attributeerror-cant-pickle-local-object-in-multiprocessing


If you define another function inside test_function() and pass it as the target to multiprocessing.Process, e.g.:

import multiprocessing

def test_function():
    def another_function():
        ...
    # With the spawn start method, start() pickles the target,
    # which fails for a locally defined function.
    multiprocessing.Process(target=another_function).start()

then the following error occurs:

FAILED tests/test_multi_process_logger.py::test_console_default_info_multi_process_normal - AttributeError: Can't pickle local object 'test_console_default_info_multi_process_normal.<locals>.logs'

This is not a problem with haplog itself; the multiprocessing standard library uses pickle. As https://stackoverflow.com/a/72776044 explains:

Basically, the reason you are getting this error is because multiprocessing uses pickle, which can only serialize top-module level functions in general.

The same answer also offers several solutions:

Apart from Method 1 and Method 2, the remaining two methods both look decent; in particular Method 2b looks like a small amount of work and could probably be built directly into haplog. The most direct workaround implied by the quote above is sketched below.
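For context, that workaround is simply to move the worker to module top level so pickle can reference it by its qualified name (a sketch, not a verbatim method from the answer):

import multiprocessing


def another_function():
    # Lives at module top level, so pickle can serialize a reference to it.
    ...


def test_function():
    p = multiprocessing.Process(target=another_function)
    p.start()
    p.join()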

Unit tests and documentation updates

Although the old unit tests were already "updated" in #3, unit tests verifying that multi-process operation actually works are still missing.


In addition, README.MD should be updated. Earlier, in 3146156, following 'Using concurrent.futures.ProcessPoolExecutor' from the Logging Cookbook's 'Logging to a single file from multiple processes', I changed queue = multiprocessing.Queue(-1) to queue = multiprocessing.Manager().Queue(-1), which did make it possible to turn the originally verbose

workers = []
for i in range(10):
    worker = multiprocessing.Process(target=worker_process,
                                     args=(queue, worker_configurer))
    workers.append(worker)
    worker.start()
for w in workers:
    w.join()

into the shorter

with concurrent.futures.ProcessPoolExecutor(max_workers=10) as executor:
    for i in range(10):
        executor.submit(worker_process, queue, worker_configurer)

and it does run. The more verbose version should probably still be kept, though, because not every tool that might be involved has built-in support for concurrent.futures.ProcessPoolExecutor, e.g. akatrevorjay/pytutils#2.

Error message flooding

If the project hits an error partway through a run, haplog does not shut down; instead it spews errors in an endless loop (demonstrated here with an error encountered while developing a private repo):

...
2023-06-17 19:20:12,219 DEBUG    [facepcs] realtime_recognition.py - loop() : start
Traceback (most recent call last):
  File "/Users/christopherchang/miniconda3/envs/facepcs/bin/facepcs-cli", line 33, in <module>
    sys.exit(load_entry_point('facepcs', 'console_scripts', 'facepcs-cli')())
  File "/Users/christopherchang/Work/FacePCS/python-package/facepcs/commands/facepcs_cli.py", line 40, in main
    service.run()
  File "/Users/christopherchang/Work/FacePCS/python-package/facepcs/commands/realtime.py", line 205, in run
    realtime()
  File "/Users/christopherchang/Work/FacePCS/python-package/facepcs/app/realtime_recognition.py", line 303, in realtime
    realtime_recognition.loop()
  File "/Users/christopherchang/Work/FacePCS/python-package/facepcs/app/realtime_recognition.py", line 117, in loop
    if np.ndarray((pre_frame == frame)).all():
ValueError: maximum supported dimension for an ndarray is 32, found 720
Oops! An issue occurred during the logging process:
Traceback (most recent call last):
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/site-packages/haplog/logger_utils.py", line 160, in listener_process
    record = _queue.get()
  File "<string>", line 2, in get
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/multiprocessing/managers.py", line 818, in _callmethod
    kind, result = conn.recv()
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/multiprocessing/connection.py", line 250, in recv
    buf = self._recv_bytes()
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/multiprocessing/connection.py", line 414, in _recv_bytes
    buf = self._recv(4)
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/multiprocessing/connection.py", line 383, in _recv
    raise EOFError
EOFError
Oops! An issue occurred during the logging process:
Traceback (most recent call last):
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/site-packages/haplog/logger_utils.py", line 160, in listener_process
    record = _queue.get()
  File "<string>", line 2, in get
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/multiprocessing/managers.py", line 817, in _callmethod
    conn.send((self._id, methodname, args, kwds))
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/multiprocessing/connection.py", line 411, in _send_bytes
    self._send(header + buf)
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe

and from there it loops forever:

Oops! An issue occurred during the logging process:
Traceback (most recent call last):
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/site-packages/haplog/logger_utils.py", line 160, in listener_process
    record = _queue.get()
  File "<string>", line 2, in get
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/multiprocessing/managers.py", line 817, in _callmethod
    conn.send((self._id, methodname, args, kwds))
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/multiprocessing/connection.py", line 411, in _send_bytes
    self._send(header + buf)
  File "/Users/christopherchang/miniconda3/envs/facepcs/lib/python3.10/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
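A possible mitigation, sketched here in the style of the Cookbook listener loop rather than haplog's actual code: treat EOFError/BrokenPipeError from the queue as fatal and leave the loop, instead of printing and retrying forever:

import logging
import traceback


def listener_process(_queue):
    while True:
        try:
            record = _queue.get()
            if record is None:  # sentinel telling the listener to quit
                break
            logging.getLogger(record.name).handle(record)
        except (EOFError, BrokenPipeError):
            # The manager process or pipe is gone; retrying can only fail
            # again, so stop instead of flooding the console.
            break
        except Exception:
            print("Oops! An issue occurred during the logging process:")
            traceback.print_exc()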
