Small Note about /etc/sysctl.conf tuning on FreeBSD
Posted 2011-03-08

Just making a note here; better written down than forgotten.

#=========================================================================================
# $FreeBSD: src/etc/sysctl.conf,v 1.8.32.1 2009/04/15 03:14:26 kensmith Exp $
#
#  This file is read when going to multi-user and its contents piped thru
#  "sysctl" to adjust kernel values.  "man 5 sysctl.conf" for details.

# Log connection attempts to ports without a listener (shows up in /var/log/messages)
net.inet.tcp.log_in_vain=1

# Uncomment this to prevent users from seeing information about processes that
# are being run under another UID.
security.bsd.see_other_uids=0
security.bsd.see_other_gids=0

# No zero mapping feature
# May break wine
# (There are also reports about broken samba3)
#security.bsd.map_at_zero=0

# If you have a really busy web server with apache13 you may run out of processes
#kern.maxproc=10000
# Same for servers with apache2 / Pound
#kern.threads.max_threads_per_proc=4096

# Max. backlog size
kern.ipc.somaxconn=4096

# Shared memory // 7.2+ can use shared memory > 2Gb
kern.ipc.shmmax=2147483648

# Sockets
kern.ipc.maxsockets=204800
# Do not use larger sockbufs on 8.0
# ( http://old.nabble.com/Significant-performance-regression-for-increased-maxsockbuf-on-8.0-RELEASE-tt26745981.html#a26745981 )
kern.ipc.maxsockbuf=262144

# Receive clusters (on amd64 7.2+ 65k is the default)
# For such a high value vm.kmem_size must be increased to 3G
#kern.ipc.nmbclusters=229376

# Jumbo pagesize (4k/8k) clusters
# Used as general packet storage for jumbo frames
# Can be monitored via `netstat -m`
#kern.ipc.nmbjumbop=192000

# Jumbo 9k/16k clusters
# If you are using them
#kern.ipc.nmbjumbo9=24000
#kern.ipc.nmbjumbo16=10240

# Every socket is a file, so increase these
kern.maxfiles=204800
kern.maxfilesperproc=200000
kern.maxvnodes=200000

# Turn off receive autotuning
#net.inet.tcp.recvbuf_auto=0

# Small receive space, only usable on an http server; on a file server this
# should be increased to 65535 or even more
#net.inet.tcp.recvspace=8192

# A small send space is useful for http servers that serve small files
# Autotuned since 7.x
net.inet.tcp.sendspace=16384

# This should be enabled if you are going to use big windows (>64k)
#net.inet.tcp.rfc1323=1
# Turn this off on high-speed, lossless connections (LAN 1Gbit+)
#net.inet.tcp.delayed_ack=0
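Since this file is just name=value pairs piped through sysctl at boot, a typo only surfaces on the next boot (or when you re-apply the file via the sysctl rc script). A minimal portable sketch to sanity-check a file before rebooting; `check_sysctl_conf` is my own helper name, not a FreeBSD tool:

```shell
# Sanity-check a sysctl.conf-style file: every non-comment, non-blank line
# should look like "name=value". Portable sh + awk sketch, runnable anywhere;
# the function name check_sysctl_conf is illustrative, not a stock utility.
check_sysctl_conf() {
  awk '
    /^[ \t]*(#.*)?$/       { next }                       # comments and blanks
    /^[A-Za-z0-9_.%-]+=.+/ { ok++; next }                 # well-formed name=value
                           { print "bad line " NR ": " $0; bad++ }
    END { printf "%d ok, %d bad\n", ok+0, bad+0; exit (bad > 0) }
  ' "$1"
}
```

The exit status is nonzero if any malformed line is found, so it drops straight into a deploy script.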
speed WAN links (or any other link with a high bandwidth delay product),<br \/>\n# especially if you are also using window scaling or have configured a large send window.<br \/>\n# You can try setting it to 0 on fileserver with 1GBit+ interfaces<br \/>\n# Automatically disables on small RTT ( <a title=\"http:\/\/www.freebsd.org\/cgi\/cvsweb.cgi\/src\/sys\/netinet\/tcp_subr.c?#rev1.237\" href=\"http:\/\/www.freebsd.org\/cgi\/cvsweb.cgi\/src\/sys\/netinet\/tcp_subr.c?#rev1.237\">http:\/\/www.freebsd.org\/cgi\/cvsweb.cgi\/src\/sys\/netinet\/tcp_subr.c?#rev1.237<\/a> )<br \/>\n#net.inet.tcp.inflight.enable=0<\/p>\n<p># Disable randomizing of ports to avoid false RST<br \/>\n# Before usage check SA here <a title=\"www.bsdcan.org\/2006\/papers\/ImprovingTCPIP.pdf\" href=\"http:\/\/www.bsdcan.org\/2006\/papers\/ImprovingTCPIP.pdf\">www.bsdcan.org\/2006\/papers\/ImprovingTCPIP.pdf<\/a><br \/>\n# (it&#8217;s also says that port randomization auto-disables at some conn.rates, but I didn&#8217;t tested it thou)<br \/>\n#net.inet.ip.portrange.randomized=0<\/p>\n<p># Increase portrange<br \/>\n# For outgoing connections only. 
Good for seed-boxes and ftp servers.<br \/>\nnet.inet.ip.portrange.first=1024<br \/>\nnet.inet.ip.portrange.last=65535<\/p>\n<p># Security<br \/>\nnet.inet.ip.redirect=0<br \/>\nnet.inet.ip.sourceroute=0<br \/>\nnet.inet.ip.accept_sourceroute=0<br \/>\nnet.inet.icmp.maskrepl=0<br \/>\nnet.inet.icmp.log_redirect=0<br \/>\nnet.inet.icmp.drop_redirect=1<br \/>\nnet.inet.tcp.drop_synfin=1<\/p>\n<p># Security<br \/>\nnet.inet.udp.blackhole=1<br \/>\nnet.inet.tcp.blackhole=2<\/p>\n<p># Increases default TTL, sometimes useful<br \/>\n# Default is 64<br \/>\nnet.inet.ip.ttl=128<\/p>\n<p># Lessen max segment life to conserve resources<br \/>\n# ACK waiting time in miliseconds (default: 30000 from RFC)<br \/>\nnet.inet.tcp.msl=5000<\/p>\n<p># Max bumber of timewait sockets<br \/>\nnet.inet.tcp.maxtcptw=40960<br \/>\n# Don&#8217;t use tw on local connections<br \/>\n# As of 15 Apr 2009. Igor Sysoev says that nolocaltimewait has some buggy realization.<br \/>\n# So disable it or now till get fixed<br \/>\n#net.inet.tcp.nolocaltimewait=1<\/p>\n<p># FIN_WAIT_2 state fast recycle<br \/>\nnet.inet.tcp.fast_finwait2_recycle=1<\/p>\n<p># Time before tcp keepalive probe is sent<br \/>\n# default is 2 hours (7200000)<br \/>\n#net.inet.tcp.keepidle=60000<\/p>\n<p># Should be increased until net.inet.ip.intr_queue_drops is zero<br \/>\nnet.inet.ip.intr_queue_maxlen=4096<\/p>\n<p># Interrupt handling via multiple CPU, but with context switch.<br \/>\n# You can play with it. 
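The port range and msl tunables interact: a closed connection's port sits in TIME_WAIT for 2 x MSL, so the sustainable rate of short connections to a single destination is roughly (last - first + 1) / (2 x MSL). Back-of-the-envelope arithmetic with the values above; purely illustrative, not a FreeBSD command:

```shell
# Rough ceiling on short-lived outbound connections per second to one
# destination: available ephemeral ports divided by the TIME_WAIT hold
# time (2 * MSL). Values taken from the tunables above.
first=1024 last=65535 msl_ms=5000
ports=$((last - first + 1))          # 64512 usable ephemeral ports
hold_s=$((2 * msl_ms / 1000))        # each port held ~10 s in TIME_WAIT
rate=$((ports / hold_s))
echo "~$rate connections/s per destination"
```

With the stock msl of 30000 the same port range supports only about a tenth of that rate, which is why the two are usually tuned together.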
# Interrupt handling via multiple CPUs, but with a context switch.
# You can play with it. Default is 1.
#net.isr.direct=0

# This is for routers only
net.inet.ip.forwarding=1
#net.inet.ip.fastforwarding=1

# This speeds up dummynet when the channel isn't saturated
net.inet.ip.dummynet.io_fast=1
# Increase the dummynet(4) hash
#net.inet.ip.dummynet.hash_size=2048
#net.inet.ip.dummynet.max_chain_len

# Should be increased when you have A LOT of files on the server
# (increase until vfs.ufs.dirhash_mem stays below it)
vfs.ufs.dirhash_maxmem=67108864

# Explicit Congestion Notification (see http://en.wikipedia.org/wiki/Explicit_Congestion_Notification )
net.inet.tcp.ecn.enable=1

# Flowtable - flow caching mechanism
# Useful for routers
#net.inet.flowtable.enable=1
#net.inet.flowtable.nmbflows=65535

# Extreme polling tuning
#kern.polling.burst_max=1000
#kern.polling.each_burst=1000
#kern.polling.reg_frac=100
#kern.polling.user_frac=1
#kern.polling.idle_poll=0

# IPFW dynamic rules and timeout tuning
# Increase dyn_buckets until net.inet.ip.fw.curr_dyn_buckets stays lower
net.inet.ip.fw.dyn_buckets=65536
net.inet.ip.fw.dyn_max=65536
net.inet.ip.fw.dyn_ack_lifetime=120
net.inet.ip.fw.dyn_syn_lifetime=10
net.inet.ip.fw.dyn_fin_lifetime=2
net.inet.ip.fw.dyn_short_lifetime=10
# Make packets pass the firewall only once when using dummynet,
# i.e. packets going through a pipe leave the firewall with an accept
#net.inet.ip.fw.one_pass=1

# shm_use_phys wires all shared pages, making them unswappable
# Use this to lessen the Virtual Memory Manager's work when using shared memory.
# Useful for databases
#kern.ipc.shm_use_phys=1

# ZFS
# Enable prefetch. Useful for sequential load types, i.e. a file server.
# FreeBSD sets vfs.zfs.prefetch_disable to 1 on any i386 system and
# on any amd64 system with less than 4GB of available memory.
# For additional info check this nabble thread:
# http://old.nabble.com/Samba-read-speed-performance-tuning-td27964534.html
#vfs.zfs.prefetch_disable=0

# On high-load servers you may notice the following message in dmesg:
# "Approaching the limit on PV entries, consider increasing either the
# vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable"
#vm.pmap.shpgperproc=500
# ==================================================================================================

Below is a sample loader.conf:

$ cat /boot/loader.conf

# ==================================================================================================
# Accept filters for data, http and DNS requests
# Useful when your software uses select() instead of kevent/kqueue, or when you are under DDoS
# The DNS accept filter is available on 8.0+
accf_data_load="YES"
accf_http_load="YES"
accf_dns_load="YES"

# Async I/O system calls
aio_load="YES"
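The `*_load` lines above all follow the same pattern, so when enabling several modules at once a throwaway helper avoids typos; `emit_load_lines` is my own sketch, not a FreeBSD utility:

```shell
# Print loader.conf-style module lines for each module name given.
# Convenience sketch only; the accf/aio module names come from the text above.
emit_load_lines() {
  for mod in "$@"; do
    printf '%s_load="YES"\n' "$mod"
  done
}
emit_load_lines accf_data accf_http accf_dns aio
```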
# Adds NCQ support to FreeBSD
# WARNING: all ad[0-9]+ devices will be renamed to ada[0-9]+
# 8.0+ only
#ahci_load=
#siis_load=

# Increase kernel memory size to 3G.
#
# Use ONLY if you have KVA_PAGES in your kernel configuration, and you have more than 3G RAM.
# Otherwise a panic will happen on the next reboot!
#
# It's required for high buffer sizes: kern.ipc.nmbjumbop, kern.ipc.nmbclusters, etc.
# Useful on high-load stateful firewalls, proxies or ZFS file servers
# (FreeBSD 7.2+ amd64 users: check that the current value is lower!)
#vm.kmem_size="3G"

# Older versions of FreeBSD can't tune maxfiles on the fly
#kern.maxfiles="200000"

# Useful for databases
# Sets maximum data size to 1G
# (FreeBSD 7.2+ amd64 users: check that the current value is lower!)
#kern.maxdsiz="1G"

# Maximum buffer size (vfs.maxbufspace)
# You can check the current one via vfs.bufspace
# Should be lowered/raised depending on the server's load type
# Usually decreased to preserve kmem
# (default is 200M)
#kern.maxbcache="512M"

# Sendfile buffers
# For i386 only
#kern.ipc.nsfbufs=10240

# syncache hash table tuning
net.inet.tcp.syncache.hashsize=1024
net.inet.tcp.syncache.bucketlimit=100

# Increased hostcache
net.inet.tcp.hostcache.hashsize="16384"
net.inet.tcp.hostcache.bucketlimit="100"

# TCP control-block hash table tuning
net.inet.tcp.tcbhashsize=4096
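The syncache and hostcache tunables bound total capacity as hashsize x bucketlimit; with the values above that works out as follows (plain arithmetic, not a FreeBSD command):

```shell
# Capacity implied by hashsize * bucketlimit for the tables tuned above.
syncache_max=$((1024 * 100))       # syncache: hashsize * bucketlimit
hostcache_max=$((16384 * 100))     # hostcache: hashsize * bucketlimit
echo "syncache can hold up to $syncache_max entries"
echo "hostcache can hold up to $hostcache_max entries"
```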
# Enable superpages, for 7.2+ only
# Also read http://lists.freebsd.org/pipermail/freebsd-hackers/2009-November/030094.html
vm.pmap.pg_ps_enabled=1

# Useful if you are using an Intel gigabit NIC
#hw.em.rxd=4096
#hw.em.txd=4096
#hw.em.rx_process_limit="-1"
# Also, if you have A LOT of interrupts on the NIC, play with the following parameters
# NOTE: you should set them for every NIC
#dev.em.0.rx_int_delay=250
#dev.em.0.tx_int_delay=250
#dev.em.0.rx_abs_int_delay=250
#dev.em.0.tx_abs_int_delay=250
# There is also a multithreaded version of the em driver, which can be found here:
# http://people.yandex-team.ru/~wawa/
#
# For additional em monitoring and statistics use
# `sysctl dev.em.0.stats=1 ; dmesg`
#
# Same tunings for igb
#hw.igb.rxd=4096
#hw.igb.txd=4096
#hw.igb.rx_process_limit=100

# Some useful netisr tunables. See sysctl net.isr
#net.isr.defaultqlimit=4096
#net.isr.maxqlimit=10240
# Bind netisr threads to CPUs
#net.isr.bindthreads=1

#
# FreeBSD 9.x+
# Increase the interface send-queue length
# See the commit message: http://svn.freebsd.org/viewvc/base?view=revision&revision=207554
#net.link.ifqmaxlen=1024

# Nicer boot logo =)
loader_logo="beastie"
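One last helper worth keeping around: ipfw's dyn_buckets must be a power of two per ipfw(8), and several other hash sizes above are conventionally powers of two as well, so when adapting the values to your load, round the target up first. `next_pow2` is my own sketch, not a stock FreeBSD tool:

```shell
# Round n up to the next power of two, e.g. for net.inet.ip.fw.dyn_buckets,
# which ipfw(8) requires to be a power of two. Illustrative helper only.
next_pow2() {
  n=$1 p=1
  while [ "$p" -lt "$n" ]; do p=$((p * 2)); done
  echo "$p"
}
# Sizing for ~50000 expected dynamic rules:
next_pow2 50000    # prints 65536
```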