



How to Set Up an APT Cacher Server (apt-cacher-ng) on Ubuntu

Setting up an APT Cacher server speeds up package downloads on the local network and saves internet bandwidth.

 

(Image source: https://blog.ichasco.com/wp-content/uploads/2017/03/Selecci%C3%B3n_053.png)

Test environment

Server       IP                                Role
APT-Cacher   10.0.2.15 (NAT), 192.168.56.101   apt-cacher-ng proxy server
APT-Client   192.168.56.201                    APT client

1. Install apt-cacher-ng

Install the apt-cacher-ng package.

sudo apt-get update
sudo apt-get install -y apt-cacher-ng


Enable and start the apt-cacher-ng service.

sudo systemctl --now enable apt-cacher-ng
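
For a quick sanity check before moving on, the following standard systemd queries (nothing specific to apt-cacher-ng) confirm that the unit is enabled and running:

systemctl is-enabled apt-cacher-ng
systemctl is-active apt-cacher-ng

They should print "enabled" and "active" respectively.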

2. Edit the configuration file

The apt-cacher-ng configuration file is located at /etc/apt-cacher-ng/acng.conf.


---

sudo cat /etc/apt-cacher-ng/acng.conf
#
# IMPORTANT NOTE:
#
# THIS FILE IS MAYBE JUST ONE OF MANY CONFIGURATION FILES IN THIS DIRECTORY.
# SETTINGS MADE IN OTHER FILES CAN OVERRIDE VALUES THAT YOU CHANGE HERE. GO
# LOOK FOR OTHER CONFIGURATION FILES! CHECK THE MANUAL AND INSTALLATION NOTES
# (like README.Debian) FOR MORE DETAILS!
#

# This is a configuration file for apt-cacher-ng, a smart caching proxy for
# software package downloads. It's supposed to be in a directory specified by
# the -c option of apt-cacher-ng, see apt-cacher-ng(8) for details.
# RULES:
# - letter case in variable names does not matter
# - names and values are separated by colon or equals sign
# - for boolean variables, zero means false, non-zero means true
# - "default value" means built-in (!) defaults, i.e. something which the
#   program uses if the option is not set here or in other config files.
#   That value might be explicitly mentioned in the description. Where it is
#   not, there is no reason to assume any of the examples to be the default
#   value! In doubt, use acngtool to query the value of the particular variable.

# Storage directory for downloaded data and related maintenance activity.
#
# Note: When the value for CacheDir is changed, change the file
# /lib/systemd/system/apt-cacher-ng.service too
#
CacheDir: /var/cache/apt-cacher-ng

# Log file directory, can be set empty to disable logging
#
LogDir: /var/log/apt-cacher-ng

# A place to look for additional configuration and resource files if they are not
# found in the configuration directory
#
SupportDir: /usr/lib/apt-cacher-ng

# TCP server port for incoming http (or HTTP proxy) connections.
# Can be set to 9999 to emulate apt-proxy. Value of 0 turns off TCP server
# (SocketPath must be set in this case).
#
# Port:3142

# Addresses or hostnames to listen on. Multiple addresses must be separated by
# spaces. Each entry must be an exact local address which is associated with a
# local interface. DNS resolution is performed using getaddrinfo(3) for all
# available protocols (IPv4, IPv6, ...). Using a protocol specific format will
# create binding(s) only on protocol specific socket(s), e.g. 0.0.0.0 will
# listen only to IPv4. The endpoint can also be specified as host:port (or
# [ipv6-address]:port) which allows binding on non-standard ports (Port
# directive is ignored in this case).
#
# Default: listens on all interfaces and protocols
#
# BindAddress: localhost 192.168.7.254 publicNameOnMainInterface

# The specification of another HTTP proxy which shall be used for downloads.
# It can include user name and password but see the manual for limitations.
#
# Default: uses direct connection
#
# Proxy: http://www-proxy.example.net:3128
# Proxy: https://username:proxypassword@proxy.example.net:3129

# Repository remapping. See manual for details.
# In this example, some backends files might be generated during package
# installation using information collected on the system.
# Examples:
Remap-debrep: file:deb_mirror*.gz /debian ; file:backends_debian # Debian Archives
Remap-uburep: file:ubuntu_mirrors /ubuntu ; file:backends_ubuntu # Ubuntu Archives
Remap-klxrep: file:kali_mirrors /kali ; file:backends_kali # Kali Linux Archives
Remap-cygwin: file:cygwin_mirrors /cygwin # ; file:backends_cygwin # incomplete, please create this file or specify preferred mirrors here
Remap-sfnet:  file:sfnet_mirrors # ; file:backends_sfnet # incomplete, please create this file or specify preferred mirrors here
Remap-alxrep: file:archlx_mirrors /archlinux # ; file:backend_archlx # Arch Linux
Remap-fedora: file:fedora_mirrors # Fedora Linux
Remap-epel:   file:epel_mirrors # Fedora EPEL
Remap-slrep:  file:sl_mirrors # Scientific Linux
Remap-gentoo: file:gentoo_mirrors.gz /gentoo ; file:backends_gentoo # Gentoo Archives
Remap-secdeb: security.debian.org security.debian.org/debian-security deb.debian.org/debian-security /debian-security cdn-fastly.deb.debian.org/debian-security ; deb.debian.org/debian-security security.debian.org cdn-fastly.deb.debian.org/debian-security

# Virtual page accessible in a web browser to see statistics and status
# information, i.e. under http://localhost:3142/acng-report.html
# NOTE: This option must be configured to run maintenance jobs (even when used
# via acngtool in cron scripts). The AdminAuth option can be used to restrict
# access to sensitive areas on that page.
#
# Default: not set, should be set by the system administrator
#
ReportPage: acng-report.html

# Socket file for accessing through local UNIX socket instead of TCP/IP. Can be
# used with inetd (via bridge tool in.acng from apt-cacher-ng package), is also
# used internally for administrative purposes.
#
# Default: /run/apt-cacher-ng/socket
#
# SocketPath: /var/run/apt-cacher-ng/socket

# If set to 1, makes log files be written to disk on every new line. Default
# is 0, buffers are flushed after the client disconnects. Technically,
# it's a convenience alias for the Debug option, see below for details.
#
# UnbufferLogs: 0

# Enables extended client information in log entries. When set to 0, only
# activity type, time and transfer sizes are logged.
#
# VerboseLog: 1

# Don't detach from the starting console.
#
# ForeGround: 0

# Store the pid of the daemon process in the specified text file.
# Default: disabled
#
# PidFile: /var/run/apt-cacher-ng/pid

# Forbid outgoing connections and work without an internet connection or
# respond with 503 error where it's not possible.
#
# Offlinemode: 0

# Forbid downloads from locations that are directly specified in the user
# request, i.e. all downloads must be processed by the preconfigured remapping
# backends (see above).
#
# ForceManaged: 0

# Days before considering an unreferenced file expired (to be deleted).
# WARNING: if the value is set too low and particular index files are not
# available for some days (mirror downtime) then there is a risk of removal of
# still useful package files.
#
ExThreshold: 4

# If set to true, the removal (i.e. response status 404) of remote
# volatile/index files is considered a hint to consider the local cached
# versions irrelevant and also expire them just like package files. This adds
# some risk of removing too much cache contents in cases where a middlebox
# reports bogus 404 codes.
#
# If false (0), a less sloppy algorithm is used to invalidate certain keyfiles
# first, which might subsequently expire the cache contents but much later or
# maybe never unless the administrator intervenes.
#
FollowIndexFileRemoval: 1

# If the expiration is run daily, it sometimes does not make much sense to do
# it because the expected changes (i.e. removal of expired files) don't justify
# the extra processing time or additional downloads for expiration operation
# itself. This discrepancy might be especially worse if the local client
# installations are small or are rarely updated but the daily changes of
# the remote archive metadata are heavy.
#
# The following option enables a possible trade-off: the expiration run is
# suppressed until a certain amount of data has been downloaded through
# apt-cacher-ng since the last expiration execution (which might indicate that
# packages were replaced with newer versions).
#
# The number can have a suffix (k,K,m,M for Kb,KiB,Mb,MiB)
#
# ExStartTradeOff: 500m

# Stop expiration when a critical problem appears, issue like a failed update
# of an index file in the preparation step.
#
# WARNING: don't set this option to zero or empty without considering possible
# consequences like a sudden and complete cache data loss.
#
# ExAbortOnProblems: 1

# Number of failed nightly expiration runs which are considered acceptable and
# do not trigger an error notification to the admin (e.g. via daily cron job)
# before the (day) count is reached. Might be useful with whacky internet
# connections.
#
# Default: a guessed value, 1 if ExThreshold is 5 or more, 0 otherwise.
#
# ExSuppressAdminNotification: 1

# Modify file names to work around limitations of some file systems.
# WARNING: experimental feature, subject to change
#
# StupidFs: 0

# Experimental feature for apt-listbugs: pass-through SOAP requests and
# responses to/from bugs.debian.org.
# Default: guessed value, true unless ForceManaged is enabled
#
# ForwardBtsSoap: 1

# There is a small in-memory cache for DNS resolution data, expired by
# this timeout (in seconds). Internal caching is disabled if set to a value
# less than zero.
#
# DnsCacheSeconds: 1800

###############################################################################
#
# WARNING: don't modify thread and file matching parameters without a clear
# idea of what is happening behind the scene!
#
# Max. count of connection threads kept ready (for faster response in the
# future). Should be a sane value between 0 and average number of connections,
# and depend on the amount of spare RAM.
# MaxStandbyConThreads: 8
#
# Hard limit of active thread count for incoming connections, i.e. operation
# is refused when this value is reached (below zero = unlimited).
# MaxConThreads: -1
#
# Timeout for a forced disconnect in cases where a client connection is about
# to be closed but remote refuses to confirm the disconnect request. Setting
# this to a lower value mitigates the effects of resource starvation in case of
# a DOS attack but increases the risk of failing to flush the remaining portion
# of data.
# DisconnectTimeout: 15

# By default, if a remote suddenly reconnects, ACNG tries at least two times to
# redownload from the same or different location (if known).
# DlMaxRetries: 2

# Pigeonholing files (like static vs. volatile contents) is done by (extended)
# regular expressions.
#
# The following patterns are available for the purposes detailed, where
# the latter takes precedence over the former:
# - «PFilePattern» for static data that doesn't change silently on the server.
# - «VFilePattern» for volatile data that may change like every hour. Files
#   that match both PFilePattern and VfilePattern will be treated as volatile.
# - Static data with file names that match VFilePattern may be overriden being
#   treated as volatile by making it match the special static data pattern,
#   «SPfilePattern».
# - «SVfilePattern» or the "special volatile data" pattern is for the
#   convenience of specifying any exceptions to matches with SPfilePattern,
#   for cases where data must still be treated as volatile.
# - «WfilePattern» specifies a "whitelist pattern" for the regular expiration
#   job, telling it to keep the files even if they are not referenced by
#   others, like crypto signatures with which clients begin their downloads.
#
# There are two versions. The pattern variables mentioned above should not be
# set without good reason, because they would override the built-in defaults
# (that might impact updates to future versions of apt-cacher-ng). There are
# also versions of those patterns ending with Ex, which may be modified by the
# local administrator. They are evaluated in addition to the regular patterns
# at runtime.
#
# To see examples of the expected syntax, run: apt-cacher-ng -p debug=1
#
# PfilePatternEx:
# VfilePatternEx:
# SPfilePatternEx:
# SVfilePatternEx:
# WfilePatternEx:
#
###############################################################################

# A bitmask type value declaring the loging verbosity and behavior of the error
# log writing. Non-zero value triggers at least faster log file flushing.
#
# Some higher bits only working with a special debug build of apt-cacher-ng,
# see the manual for details.
#
# WARNING: this can write significant amount of data into apt-cacher.err logfile.
#
# Default: 0
#
# Debug:3

# Usually, general purpose proxies like Squid expose the IP address of the
# client user to the remote server using the X-Forwarded-For HTTP header. This
# behaviour can be optionally turned on with the Expose-Origin option.
#
# ExposeOrigin: 0

# When logging the originating IP address, trust the information supplied by
# the client in the X-Forwarded-For header.
#
# LogSubmittedOrigin: 0

# The version string reported to the peer, to be displayed as HTTP client (and
# version) in the logs of the mirror.
#
# WARNING: Expect side effects! Some archives use this header to guess
# capabilities of the client (i.e. allow redirection and/or https links) and
# change their behaviour accordingly but ACNG might not support the expected
# features.
#
# Default:
#
# UserAgent: Yet Another HTTP Client/1.2.3p4

# In some cases the Import and Expiration tasks might create fresh volatile
# data for internal use by reconstructing them using patch files. This
# by-product might be recompressed with bzip2 and with some luck the resulting
# file becomes identical to the *.bz2 file on the server which can be used by
# APT when requesting a complete version of this file.
# The downside of this feature is higher CPU load on the server during
# the maintenance tasks, and the outcome might have not much value in a LAN
# where all clients update their data often and regularly and therefore usually
# don't need the full version of the index file.
#
# RecompBz2: 0

# Network timeout for outgoing connections, in seconds.
#
# NetworkTimeout: 40

# Fast fallback timeout, in seconds. This is the time to wait before
# alternative target addresses for a client connection are tried, which can be
# usefull for quick fallback to IPv4 in case of whacky IPv6 configuration.
#
# FastTimeout = 4

# Sometimes it makes sense to not store the data in cache and just return the
# package data to client while it comes in. The following DontCache* parameters
# can enable this behaviour for certain URL types. The tokens are extended
# regular expressions which the URLs are evaluated against.
#
# DontCacheRequested is applied to the URL as it comes in from the client.
# Example: exclude packages built with kernel-package for x86
# DontCacheRequested: linux-.*_10\...\.Custo._i386
# Example usecase: exclude popular private IP ranges from caching
# DontCacheRequested: 192.168.0 ^10\..* 172.30
#
# DontCacheResolved is applied to URLs after mapping to the target server. If
# multiple backend servers are specified then it's only matched against the
# download link for the FIRST possible source (due to implementation limits).
#
# Example usecase: all Ubuntu stuff comes from a local mirror (specified as
# backend), don't cache it again:
# DontCacheResolved: ubuntumirror.local.net
#
# DontCache directive sets (overrides) both, DontCacheResolved and
# DontCacheRequested.  Provided for convenience, see those directives for
# details.
#
# Example:
# DontCache: .*.local.university.int

# Default permission set of freshly created files and directories, as octal
# numbers (see chmod(1) for details).
# Can by limited by the umask value (see umask(2) for details) if it's set in
# the environment of the starting shell, e.g. in apt-cacher-ng init script or
# in its configuration file.
#
# DirPerms: 00755
# FilePerms: 00664

# It's possible to use use apt-cacher-ng as a regular web server with a limited
# feature set, i.e. directory browsing, downloads of any files, Content-Type
# based on /etc/mime.types, but without sorting, CGI execution, index page
# redirection and other funny things.
# To get this behavior, mappings between virtual directories and real
# directories on the server must be defined with the LocalDirs directive.
# Virtual and real directories are separated by spaces, multiple pairs are
# separated by semi-colons. Real directories must be absolute paths.
# NOTE: Since the names of that key directories share the same namespace as
# repository names (see Remap-...) it is administrator's job to avoid conflicts
# between them or explicitly create them.
#
# LocalDirs: woo /data/debarchive/woody ; hamm /data/debarchive/hamm
LocalDirs: acng-doc /usr/share/doc/apt-cacher-ng

# Precache a set of files referenced by specified index files. This can be used
# to create a partial mirror usable for offline work. There are certain limits
# and restrictions on the path specification, see manual and the cache control
# web site for details. A list of (maybe) relevant index files could be
# retrieved via "apt-get --print-uris update" on a client machine.
#
# Example:
# PrecacheFor: debrep/dists/unstable/*/source/Sources* debrep/dists/unstable/*/binary-amd64/Packages*

# Arbitrary set of data to append to request headers sent over the wire. Should
# be a well formated HTTP headers part including newlines (DOS style) which
# can be entered as escape sequences (\r\n).
#
# RequestAppendix: X-Tracking-Choice: do-not-track\r\n

# Specifies the IP protocol families to use for remote connections. Order does
# matter, first specified are considered first. Possible combinations:
# v6 v4
# v4 v6
# v6
# v4
# Default: use native order of the system's TCP/IP stack, influenced by the
# BindAddress value.
#
# ConnectProto: v6 v4

# Regular expiration algorithm finds package files which are no longer listed
# in any index file and removes them of them after a safety period.
# This option allows to keep more versions of a package in the cache after
# the safety period is over.
#
# KeepExtraVersions: 0

# Optionally uses TCP access control provided by libwrap, see hosts_access(5)
# for details. Daemon name is apt-cacher-ng.
#
# Default: guessed on startup by looking for explicit mention of apt-cacher-ng
# in /etc/hosts.allow or /etc/hosts.deny files.
#
# UseWrap: 0

# If many machines from the same local network attempt to update index files
# (apt-get update) at nearly the same time, the known state of these index file
# is temporarily frozen and multiple requests receive the cached response
# without contacting the remote server again. This parameter (in seconds)
# specifies the length of this period before these (volatile) files are
# considered outdated.
# Setting this value too low transfers more data and increases remote server
# load, setting this too high (more than a couple of minutes) increases the
# risk of delivering inconsistent responses to the clients.
#
# FreshIndexMaxAge: 27

# Usually the users are not allowed to specify custom TCP ports of remote
# mirrors in the requests, only the default HTTP port can be used (as
# workaround, proxy administrator can create Remap- rules with custom ports).
# This restriction can be disabled by specifying a list of allowed ports or 0
# for any port.
#
# AllowUserPorts: 80

# Normally the HTTP redirection responses are forwarded to the original caller
# (i.e. APT) which starts a new download attempt from the new URL. This
# solution is ok for client configurations with proxy mode but doesn't work
# well with configurations using URL prefixes in sources.list. To work around
# this the server can restart its own download with a redirection URL,
# configured with the following option. The downside is that this might be used
# to circumvent download source policies by malicious users.
# The RedirMax option specifies how many such redirects the server is allowed
# to follow per request, 0 disables the internal redirection.
# Default: guessed on startup, 0 if ForceManaged is used and 5 otherwise.
#
# RedirMax: 5

# There some broken HTTP servers and proxy servers in the wild which don't
# support the If-Range header correctly and return incorrect data when the
# contents of a (volatile) file changed. This also applies to incomplete
# resumed downloads.  Setting VfileUseRangeOps to 0 disables Range-based
# requests (using purely If-Modified-Since and requesting the complete file
# instead, if changed). Setting it to a negative value removes even this check
# and means fetching the whole file from the beginning.
#
# VfileUseRangeOps: 1

# Allow data pass-through mode for certain hosts when requested by the client
# using a CONNECT request. This is particularly useful to allow access to SSL
# sites (https proxying). The string is a regular expression which should cover
# the server name with port and must be correctly formated and terminated.
# Examples:
# PassThroughPattern: private-ppa\.launchpad\.net:443$
# PassThroughPattern: .* # this would allow CONNECT to everything
#
# Default: ^(bugs\.debian\.org|changelogs\.ubuntu\.com):443$
# PassThroughPattern: ^(bugs\.debian\.org|changelogs\.ubuntu\.com):443$

# Interval an overaged local cache item (i.e. active file descriptor) can be
# considered broken so that a new forced download can be started. Such
# situation can happen when a very slow clients keeps a hot cache item active
# for extended amounts of time so that even the remote freshness checks
# intervals might become overrun.
#
# Default time is based on the value of FreshIndexMaxAge with a safety factor.
#
# ResponseFreezeDetectTime: 60

# Keep outgoing connections alive and reuse them for later downloads from
# the same server as long as possible.
#
# ReuseConnections: 1

# Maximum number of requests sent in a batch to remote servers before the first
# response is expected. Using higher values can greatly improve average
# throughput depending on network latency and the implementation of remote
# servers. Makes most sense when also enabled on the client side, see apt.conf
# documentation for details.
#
# Default: 10 if ReuseConnections is set, 1 otherwise
#
# PipelineDepth: 10

# Path to the system directory containing trusted CA certificates used for
# outgoing connections, see OpenSSL documentation for details.
#
# CApath: /etc/ssl/certs
#
# Path to a single trusted trusted CA certificate used for outgoing
# connections, see OpenSSL documentation for details.
#
# CAfile:

# There are different ways to detect that an upstream proxy is broken and turn
# off its use and connect directly. The first is through a custom command -
# when it returns successfully, the proxy is used, otherwise not and the
# command will be rerun only after a specified period.
# Another way is to try to connect to the proxy first and detect a connection
# timeout. The connection will then be made without HTTP proxy for the life
# time of the particular download stream and it may also affect other other
# parallel downloads.
# NOTE: this operation modes are still experimental and are subject to change!
# Unwanted side effects may occur with multiple simultaneous user connections
# or with specific per-repository proxy settings.
#
# Shell command, default: not set. Executed with the default shell and
# permissions of the apt-cacher-ng's process user. Examples:
# /bin/ip route | grep -q 192.168.117
# /usr/sbin/arp | grep -q 00:22:1f:51:8e:c1
#
# OptProxyCheckCommand: ...
#
# Check intervall, in seconds.
#
# OptProxyCheckInterval: 99
#
# Conection timeout in seconds, default: negative, means disabled.
#
# OptProxyTimeout: -1

# It's possible to limit the processing speed of download agents to set an
# overall download speed limit. Unit: KiB/s, Default: unlimited.
#
# MaxDlSpeed: 500

# In special corner cases, download clients attempt to download random chunks
# of a files headers, i.e. the first kilobytes. The "don't get client stuck"
# policy converts this usually to a 200 response starting the body from the
# beginning but that confuses some clients. When this option is set to a
# certain value, this modifies the behaviour and allows to start a file
# download where the distance between available data and the specified range
# lies within that bounds. This can look like random lag for the user but
# should be harmless apart from that.
#
# MaxInresponsiveDlSize: 64000

# In mobile environments having an adhoc connection with a redirection to some
# id verification side, this redirect might damage the cache since the data is
# involuntarily stored as package data. There is a mechanism which attempts to
# detect a such situation and mitigate the mentioned effects by not storing the
# data and also dropping the DNS cache. The trigger is the occurrence of a
# specific SUBSTRING in the content type field of the final download target
# (i.e. the auth web site) and at least one followed redirection.
#
# BadRedirDetectMime: text/html

# When a BUS signal is received (typically on IO errors), a shell command can be
# executed before the daemon is terminated.
# Example:
# BusAction: ls -l /proc/$PPID/ | mail -s SIGBUS! root

# Only set this value for debugging purposes. It disables SSL security checks
# like strict host verification. 0 means no, any other value can have
# differrent meaning in the future.
#
# NoSSLChecks: 0

# Setting this value means: on file downloads from/via cache, tag relevant
# files. And when acngtool runs the shrink command, it will look at the day
# when the file was retrieved from cache last time (and not when it was
# originally downloaded).
#
# TrackFileUse: 0

# Controls preallocation of file system space where this feature is supported.
# This might reduce disk fragmentation and therefore improve later read
# performance. However, write performance can be reduced which could be
# exploited by malicious users.
# The value defines a size limit of how much to report to the OS as expected
# file size (starting from the beginning of the file).
# Set to zero to disable this feature completely. Default: one megabyte
#
# ReserveSpace: 1048576

# PermitCacheControl will allow users to specify a few hints for processing
# of a request, for example bypassing the local cache (see
# https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control for
# no-cache, no-store).
#
# PermitCacheControl: no-cache, no-store

---

$ sudo cat /etc/apt-cacher-ng/acng.conf | egrep 'CacheDir|Port:3142'
# Note: When the value for CacheDir is changed, change the file
CacheDir: /var/cache/apt-cacher-ng
# Port:3142
Open the configuration file in an editor.

sudo vim /etc/apt-cacher-ng/acng.conf

You can change or add settings such as the following:

  • CacheDir: specifies the path of the cache directory. The default is /var/cache/apt-cacher-ng.
  • Port: specifies the port the APT Cacher server listens on. The default is 3142 (see the sketch below).
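
For illustration only (the values below are the built-in defaults, so no change is strictly required), the two relevant lines in /etc/apt-cacher-ng/acng.conf would read:

CacheDir: /var/cache/apt-cacher-ng
Port: 3142

If you prefer a non-interactive edit, a one-liner such as the following uncomments the Port directive by rewriting the "# Port:3142" line shown in the dump above:

sudo sed -i 's/^# Port:3142$/Port: 3142/' /etc/apt-cacher-ng/acng.conf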

After modifying the options, save the file and close the editor.


3. Restart the service

After changing the configuration, restart the APT Cacher service.

sudo systemctl restart apt-cacher-ng

Check the APT Cacher service status.

$ sudo systemctl status apt-cacher-ng
● apt-cacher-ng.service - Apt-Cacher NG software download proxy
     Loaded: loaded (/lib/systemd/system/apt-cacher-ng.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2023-10-16 16:20:47 KST; 1min 30s ago
   Main PID: 666 (apt-cacher-ng)
      Tasks: 1 (limit: 4558)
     Memory: 6.2M
        CPU: 89ms
     CGroup: /system.slice/apt-cacher-ng.service
             └─666 /usr/sbin/apt-cacher-ng -c /etc/apt-cacher-ng ForeGround=1

Oct 16 16:20:46 aptmirror systemd[1]: Starting Apt-Cacher NG software download proxy...
Oct 16 16:20:47 aptmirror systemd[1]: Started Apt-Cacher NG software download proxy.

Check the port the APT Cacher service is listening on.

$ ss -nlpt | egrep 3142
LISTEN 0      250          0.0.0.0:3142      0.0.0.0:*    users:(("apt-cacher-ng",pid=666,fd=11))
LISTEN 0      250             [::]:3142         [::]:*    users:(("apt-cacher-ng",pid=666,fd=12))
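
Because the configuration above sets ReportPage: acng-report.html, the built-in status page should also be reachable over HTTP. As a quick check from any machine on the network (the IP below is the test server used in this article; adjust it for your environment):

curl -I http://192.168.56.101:3142/acng-report.html

An HTTP 200 response indicates the proxy is accepting connections and serving the report page.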

4. Configure the client

Now point APT on the client system at the APT Cacher server.

Create a file named /etc/apt/apt.conf.d/02proxy and add the following content.

sudo vim /etc/apt/apt.conf.d/02proxy
Acquire::http { Proxy "http://<APT-CACHER-NG-SERVER-IP>:3142"; };

Replace <APT-CACHER-NG-SERVER-IP> with the actual IP address of your APT Cacher server.

Acquire::http { Proxy "http://192.168.56.101:3142"; };

Alternatively, add it with the echo command.

echo 'Acquire::http { Proxy "http://192.168.56.101:3142"; };' | sudo tee /etc/apt/apt.conf.d/02proxy
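
If certain repositories must not go through the cache (for example an HTTPS-only archive), APT supports per-host overrides in the same file; the host name below is only an example:

Acquire::http::Proxy::private-ppa.launchpad.net "DIRECT";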

Check the /etc/apt/apt.conf.d/02proxy file.

$ cat /etc/apt/apt.conf.d/02proxy
Acquire::http { Proxy "http://192.168.56.101:3142"; };
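
To confirm that APT actually picks up the proxy setting, dump APT's effective configuration with the standard apt-config tool:

apt-config dump | grep -i proxy

The output should include an Acquire::http::Proxy entry pointing at http://192.168.56.101:3142.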

5. Test

After building the APT Cacher server, run a package update or install a package on the client system to verify that the requests go through the APT Cacher server.

$ apt update
Hit:1 http://kr.archive.ubuntu.com/ubuntu jammy InRelease
Get:2 http://kr.archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]
Hit:3 http://kr.archive.ubuntu.com/ubuntu jammy-backports InRelease
Get:4 http://kr.archive.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Fetched 229 kB in 4s (57.4 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
26 packages can be upgraded. Run 'apt list --upgradable' to see them.

While packages are being downloaded, they are cached by the APT Cacher server.
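
To watch the cache fill up, look on the server side. The paths below come from the CacheDir and LogDir values in the configuration shown earlier; the directory layout inside the cache depends on which repositories the clients use:

sudo du -sh /var/cache/apt-cacher-ng
sudo ls /var/cache/apt-cacher-ng
sudo ls -l /var/log/apt-cacher-ng/

After a client runs apt update or installs packages, the cache directory should grow and the log directory should contain fresh access logs.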


---

Before using APT Cacher (the client has no direct internet access, so the update fails):

$ apt-get update
Ign:1 http://kr.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 http://kr.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:3 http://kr.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:4 http://kr.archive.ubuntu.com/ubuntu jammy-security InRelease
Ign:1 http://kr.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 http://kr.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:3 http://kr.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:4 http://kr.archive.ubuntu.com/ubuntu jammy-security InRelease
Ign:1 http://kr.archive.ubuntu.com/ubuntu jammy InRelease
Ign:2 http://kr.archive.ubuntu.com/ubuntu jammy-updates InRelease
Ign:3 http://kr.archive.ubuntu.com/ubuntu jammy-backports InRelease
Ign:4 http://kr.archive.ubuntu.com/ubuntu jammy-security InRelease
Err:1 http://kr.archive.ubuntu.com/ubuntu jammy InRelease
  Temporary failure resolving 'kr.archive.ubuntu.com'
Err:2 http://kr.archive.ubuntu.com/ubuntu jammy-updates InRelease
  Temporary failure resolving 'kr.archive.ubuntu.com'
Err:3 http://kr.archive.ubuntu.com/ubuntu jammy-backports InRelease
  Temporary failure resolving 'kr.archive.ubuntu.com'
Err:4 http://kr.archive.ubuntu.com/ubuntu jammy-security InRelease
  Temporary failure resolving 'kr.archive.ubuntu.com'
Reading package lists... Done
W: Failed to fetch http://kr.archive.ubuntu.com/ubuntu/dists/jammy/InRelease  Temporary failure resolving 'kr.archive.ubuntu.com'
W: Failed to fetch http://kr.archive.ubuntu.com/ubuntu/dists/jammy-updates/InRelease  Temporary failure resolving 'kr.archive.ubuntu.com'
W: Failed to fetch http://kr.archive.ubuntu.com/ubuntu/dists/jammy-backports/InRelease  Temporary failure resolving 'kr.archive.ubuntu.com'
W: Failed to fetch http://kr.archive.ubuntu.com/ubuntu/dists/jammy-security/InRelease  Temporary failure resolving 'kr.archive.ubuntu.com'
W: Some index files failed to download. They have been ignored, or old ones used instead.

---

 

By following these steps, you can build an APT Cacher server on Ubuntu and cache packages efficiently for the local network, which improves package download speeds and saves bandwidth.

 

References
- blog.ichasco.com: APT-Cacher-ng: Repositorio local cacheado

 
