List MIB
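Each name below is a sysctl(8) MIB entry; the text next to it is the kernel-provided description string. The entries can be read from the command line with sysctl(8) or programmatically with sysctlbyname(3). As a minimal sketch (the choice of debug.bootverbose as the example is arbitrary, taken from the table below), reading one integer-valued entry from C looks like this:

```c
/*
 * Minimal sketch: read one integer MIB entry from the table below with
 * sysctlbyname(3).  "debug.bootverbose" is only an example name; any
 * integer-valued entry in the list works the same way.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	int value;
	size_t len = sizeof(value);

	/* Look the entry up by name; the kernel fills in the current value. */
	if (sysctlbyname("debug.bootverbose", &value, &len, NULL, 0) == -1) {
		perror("sysctlbyname");
		return (EXIT_FAILURE);
	}
	printf("debug.bootverbose = %d\n", value);
	return (EXIT_SUCCESS);
}
```

The same entry can be inspected interactively with `sysctl -d debug.bootverbose`, which also prints the description string shown in the table.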
Name |
Description |
compat |
Compatibility code |
compat.ia32 |
ia32 mode |
compat.ia32.maxdsiz |
|
compat.ia32.maxssiz |
|
compat.ia32.maxvmem |
|
compat.linux |
Linux mode |
compat.linux.debug |
Log warnings from linux(4); or 0 to disable |
compat.linux.default_openfiles |
Default soft openfiles resource limit, or -1 for unlimited |
compat.linux.default_stacksize |
Default soft stack size resource limit, or -1 for unlimited |
compat.linux.dummy_rlimits |
Return dummy values for unsupported Linux-specific rlimits |
compat.linux.emul_path |
Linux runtime environment path |
compat.linux.ignore_ip_recverr |
Ignore enabling IP_RECVERR |
compat.linux.map_sched_prio |
Map scheduler priorities to Linux priorities (not POSIX compliant) |
compat.linux.osname |
Linux kernel OS name |
compat.linux.osrelease |
Linux kernel OS release |
compat.linux.oss_version |
Linux OSS version |
compat.linux.preserve_vstatus |
Preserve VSTATUS termios(4) flag |
compat.linux.setid_allowed |
Allow setuid/setgid on execve of Linux binary |
compat.linux.use_real_ifnames |
Use FreeBSD interface names instead of generating ethN aliases |
compat.linux32 |
32-bit Linux emulation |
compat.linux32.emulate_i386 |
Emulate the real i386 |
compat.linux32.maxdsiz |
|
compat.linux32.maxssiz |
|
compat.linux32.maxvmem |
|
compat.linuxkpi |
LinuxKPI parameters |
compat.linuxkpi.80211 |
LinuxKPI 802.11 compatibility layer |
compat.linuxkpi.80211.debug |
LinuxKPI 802.11 debug level |
compat.linuxkpi.80211.hw_crypto |
Enable LinuxKPI 802.11 hardware crypto offload |
compat.linuxkpi.80211.tkip |
Enable LinuxKPI 802.11 TKIP crypto offload |
compat.linuxkpi.80211.wlan0 |
VIF Information |
compat.linuxkpi.80211.wlan0.dump_stas |
Dump sta statistics of this vif |
compat.linuxkpi.amdgpu_abmlevel |
ABM level (0 = off (default), 1-4 = backlight reduction level) |
compat.linuxkpi.amdgpu_aspm |
ASPM support (1 = enable, 0 = disable, -1 = auto) |
compat.linuxkpi.amdgpu_async_gfx_ring |
Asynchronous GFX rings that could be configured with either different priorities (HP3D ring and LP3D ring), or equal priorities (0 = disabled, 1 = enabled (default)) |
compat.linuxkpi.amdgpu_audio |
Audio enable (-1 = auto, 0 = disable, 1 = enable) |
compat.linuxkpi.amdgpu_backlight |
Backlight control (0 = pwm, 1 = aux, -1 auto (default)) |
compat.linuxkpi.amdgpu_bad_page_threshold |
Bad page threshold (-1 = auto (default), 0 = disable bad page retirement, -2 = ignore bad page threshold) |
compat.linuxkpi.amdgpu_bapm |
BAPM support (1 = enable, 0 = disable, -1 = auto) |
compat.linuxkpi.amdgpu_benchmark |
Run benchmark |
compat.linuxkpi.amdgpu_cg_mask |
Clockgating flags mask (0 = disable clock gating) |
compat.linuxkpi.amdgpu_cik_support |
CIK support (1 = enabled (default), 0 = disabled) |
compat.linuxkpi.amdgpu_compute_multipipe |
Force compute queues to be spread across pipes (1 = enable, 0 = disable, -1 = auto) |
compat.linuxkpi.amdgpu_dc |
Display Core driver (1 = enable, 0 = disable, -1 = auto (default)) |
compat.linuxkpi.amdgpu_dcdebugmask |
all debug options disabled (default) |
compat.linuxkpi.amdgpu_dcfeaturemask |
all stable DC features enabled (default) |
compat.linuxkpi.amdgpu_deep_color |
Deep Color support (1 = enable, 0 = disable (default)) |
compat.linuxkpi.amdgpu_discovery |
Allow driver to discover hardware IPs from IP Discovery table at the top of VRAM |
compat.linuxkpi.amdgpu_disp_priority |
Display Priority (0 = auto, 1 = normal, 2 = high) |
compat.linuxkpi.amdgpu_dpm |
DPM support (1 = enable, 0 = disable, -1 = auto) |
compat.linuxkpi.amdgpu_emu_mode |
Emulation mode, (1 = enable, 0 = disable) |
compat.linuxkpi.amdgpu_exp_hw_support |
experimental hw support (1 = enable, 0 = disable (default)) |
compat.linuxkpi.amdgpu_force_asic_type |
A non-negative value used to specify the ASIC type for all supported GPUs |
compat.linuxkpi.amdgpu_forcelongtraining |
force memory long training |
compat.linuxkpi.amdgpu_freesync_video |
Enable freesync modesetting optimization feature (0 = off (default), 1 = on) |
compat.linuxkpi.amdgpu_fw_load_type |
firmware loading type (3 = rlc backdoor autoload if supported, 2 = smu load if supported, 1 = psp load, 0 = force direct if supported, -1 = auto) |
compat.linuxkpi.amdgpu_gartsize |
Size of GART to set up in megabytes (32, 64, etc., -1=auto) |
compat.linuxkpi.amdgpu_gpu_recovery |
Enable GPU recovery mechanism, (2 = advanced tdr mode, 1 = enable, 0 = disable, -1 = auto) |
compat.linuxkpi.amdgpu_gttsize |
Size of the GTT domain in megabytes (-1 = auto) |
compat.linuxkpi.amdgpu_hw_i2c |
hw i2c engine enable (0 = disable) |
compat.linuxkpi.amdgpu_ip_block_mask |
IP Block Mask (all blocks enabled (default)) |
compat.linuxkpi.amdgpu_job_hang_limit |
how long to allow a job to hang before dropping it (default 0) |
compat.linuxkpi.amdgpu_lbpw |
Load Balancing Per Watt (LBPW) support (1 = enable, 0 = disable, -1 = auto) |
compat.linuxkpi.amdgpu_lockup_timeout |
GPU lockup timeout in ms (default: for bare metal 10000 for non-compute jobs and 60000 for compute jobs; for passthrough or sriov, 10000 for all jobs. 0: keep default value. negative: infinity timeout), format: for bare metal [Non-Compute] or [GFX,Compute,SDMA,Video]; for passthrough or sriov [all jobs] or [GFX,Compute,SDMA,Video]. |
compat.linuxkpi.amdgpu_mcbp |
Enable Mid-command buffer preemption (0 = disabled (default), 1 = enabled) |
compat.linuxkpi.amdgpu_mes |
Enable Micro Engine Scheduler (0 = disabled (default), 1 = enabled) |
compat.linuxkpi.amdgpu_mes_kiq |
Enable Micro Engine Scheduler KIQ (0 = disabled (default), 1 = enabled) |
compat.linuxkpi.amdgpu_moverate |
Maximum buffer migration rate in MB/s. (32, 64, etc., -1=auto, 0=1=disabled) |
compat.linuxkpi.amdgpu_msi |
MSI support (1 = enable, 0 = disable, -1 = auto) |
compat.linuxkpi.amdgpu_noretry |
Disable retry faults (0 = retry enabled, 1 = retry disabled, -1 auto (default)) |
compat.linuxkpi.amdgpu_num_kcq |
number of kernel compute queues the user wants to set up (8 if set to greater than 8 or less than 0; only affects gfx 8+) |
compat.linuxkpi.amdgpu_pcie_gen2 |
PCIE Gen2 mode (-1 = auto, 0 = disable, 1 = enable) |
compat.linuxkpi.amdgpu_pcie_gen_cap |
PCIE Gen Caps (0: autodetect (default)) |
compat.linuxkpi.amdgpu_pcie_lane_cap |
PCIE Lane Caps (0: autodetect (default)) |
compat.linuxkpi.amdgpu_pg_mask |
Powergating flags mask (0 = disable power gating) |
compat.linuxkpi.amdgpu_ppfeaturemask |
all power features enabled (default) |
compat.linuxkpi.amdgpu_ras_enable |
Enable RAS features on the GPU (0 = disable, 1 = enable, -1 = auto (default)) |
compat.linuxkpi.amdgpu_ras_mask |
Mask of RAS features to enable (default 0xffffffff), only valid when ras_enable == 1 |
compat.linuxkpi.amdgpu_reset_method |
GPU reset method (-1 = auto (default), 0 = legacy, 1 = mode0, 2 = mode1, 3 = mode2, 4 = baco/bamaco) |
compat.linuxkpi.amdgpu_runpm |
PX runtime pm (2 = force enable with BAMACO, 1 = force enable with BACO, 0 = disable, -1 = auto) |
compat.linuxkpi.amdgpu_sched_hw_submission |
the max number of HW submissions (default 2) |
compat.linuxkpi.amdgpu_sched_jobs |
the max number of jobs supported in the sw queue (default 32) |
compat.linuxkpi.amdgpu_sdma_phase_quantum |
SDMA context switch phase quantum (x 1K GPU clock cycles, 0 = no change (default 32)) |
compat.linuxkpi.amdgpu_sg_display |
S/G Display (-1 = auto (default), 0 = disable) |
compat.linuxkpi.amdgpu_si_support |
SI support (1 = enabled (default), 0 = disabled) |
compat.linuxkpi.amdgpu_smu_memory_pool_size |
reserve GTT for SMU debug usage (0 = disable, 0x1 = 256 MB, 0x2 = 512 MB, 0x4 = 1 GB, 0x8 = 2 GB) |
compat.linuxkpi.amdgpu_smu_pptable_id |
specify the pptable ID to be used (-1 = auto (default), 0 = use pptable from VBIOS, > 0 = soft pptable ID) |
compat.linuxkpi.amdgpu_test |
Run tests |
compat.linuxkpi.amdgpu_timeout_fatal_disable |
disable watchdog timeout fatal error (false = default) |
compat.linuxkpi.amdgpu_timeout_period |
watchdog timeout period (0 = timeout disabled, 1 ~ 0x23 = timeout maxcycles = (1 << period)) |
compat.linuxkpi.amdgpu_tmz |
Enable TMZ feature (-1 = auto (default), 0 = off, 1 = on) |
compat.linuxkpi.amdgpu_use_xgmi_p2p |
Enable XGMI P2P interface (0 = disable; 1 = enable (default)) |
compat.linuxkpi.amdgpu_vcnfw_log |
Enable vcnfw log (0 = disable (default), 1 = enable) |
compat.linuxkpi.amdgpu_vis_vramlimit |
Restrict visible VRAM for testing, in megabytes |
compat.linuxkpi.amdgpu_visualconfirm |
Visual confirm (0 = off (default), 1 = MPO, 5 = PSR) |
compat.linuxkpi.amdgpu_vm_block_size |
VM page table size in bits (default depending on vm_size) |
compat.linuxkpi.amdgpu_vm_debug |
Debug VM handling (0 = disabled (default), 1 = enabled) |
compat.linuxkpi.amdgpu_vm_fault_stop |
Stop on VM fault (0 = never (default), 1 = print first, 2 = always) |
compat.linuxkpi.amdgpu_vm_fragment_size |
VM fragment size in bits (4, 5, etc. 4 = 64K (default), Max 9 = 2M) |
compat.linuxkpi.amdgpu_vm_size |
VM address space size in gigabytes (default 64GB) |
compat.linuxkpi.amdgpu_vm_update_mode |
VM update using CPU (0 = never (default except for large BAR (LB)), 1 = Graphics only, 2 = Compute only (default for LB), 3 = Both) |
compat.linuxkpi.amdgpu_vramlimit |
Restrict VRAM for testing, in megabytes |
compat.linuxkpi.debug |
Set to enable pr_debug() prints. Clear to disable. |
compat.linuxkpi.drm_debug |
Enable debug output, where each bit enables a debug category.
Bit 0 (0x01) will enable CORE messages (drm core code)
Bit 1 (0x02) will enable DRIVER messages (drm controller code)
Bit 2 (0x04) will enable KMS messages (modesetting code)
Bit 3 (0x08) will enable PRIME messages (prime code)
Bit 4 (0x10) will enable ATOMIC messages (atomic code)
Bit 5 (0x20) will enable VBL messages (vblank code)
Bit 7 (0x80) will enable LEASE messages (leasing code)
Bit 8 (0x100) will enable DP messages (displayport code) |
compat.linuxkpi.drm_dp_aux_i2c_speed_khz |
Assumed speed of the i2c bus in kHz (1-400, default 10) |
compat.linuxkpi.drm_dp_aux_i2c_transfer_size |
Number of bytes to transfer in a single I2C over DP AUX CH message, (1-16, default 16) |
compat.linuxkpi.drm_drm_fbdev_overalloc |
Overallocation of the fbdev buffer (%) [default=100] |
compat.linuxkpi.drm_edid_fixup |
Minimum number of valid EDID header bytes (0-8, default 6) |
compat.linuxkpi.drm_fbdev_emulation |
Enable legacy fbdev emulation [default=true] |
compat.linuxkpi.drm_poll |
help drm kms poll |
compat.linuxkpi.drm_timestamp_precision_usec |
Max. error on timestamps [usecs] |
compat.linuxkpi.drm_vblankoffdelay |
Delay until vblank irq auto-disable [msecs] (0: never disable, <0: disable immediately) |
compat.linuxkpi.iwlwifi_11n_disable |
disable 11n functionality, bitmap: 1: full, 2: disable agg TX, 4: disable agg RX, 8: enable agg TX |
compat.linuxkpi.iwlwifi_amsdu_size |
amsdu size: 0: 12K for multi-RX-queue devices, 2K for AX210 devices, 4K for other devices; 1: 4K; 2: 8K; 3: 12K (16K buffers); 4: 2K (default 0) |
compat.linuxkpi.iwlwifi_bt_coex_active |
enable wifi/bt co-exist (default: enable) |
compat.linuxkpi.iwlwifi_debug |
debug output mask |
compat.linuxkpi.iwlwifi_disable_11ac |
Disable VHT capabilities (default: false) |
compat.linuxkpi.iwlwifi_disable_11ax |
Disable HE capabilities (default: false) |
compat.linuxkpi.iwlwifi_disable_11be |
Disable EHT capabilities (default: false) |
compat.linuxkpi.iwlwifi_enable_ini |
Debug INI TLV FW debug infrastructure: 0: disable, 1-15: FW_DBG_PRESET values, 16: enabled without a preset value defined (default: 16) |
compat.linuxkpi.iwlwifi_fw_restart |
restart firmware in case of error (default true) |
compat.linuxkpi.iwlwifi_led_mode |
0=system default, 1=On(RF On)/Off(RF Off), 2=blinking, 3=Off (default: 0) |
compat.linuxkpi.iwlwifi_mvm_init_dbg |
set to true to debug an ASSERT in INIT fw (default: false) |
compat.linuxkpi.iwlwifi_mvm_power_scheme |
power management scheme: 1-active, 2-balanced, 3-low power, default: 2 |
compat.linuxkpi.iwlwifi_nvm_file |
NVM file name |
compat.linuxkpi.iwlwifi_pci_ids_name |
iwlwifi PCI IDs and names |
compat.linuxkpi.iwlwifi_power_level |
default power save level (range from 1 - 5, default: 1) |
compat.linuxkpi.iwlwifi_power_save |
enable WiFi power management (default: disable) |
compat.linuxkpi.iwlwifi_remove_when_gone |
Remove dev from PCIe bus if it is deemed inaccessible (default: false) |
compat.linuxkpi.iwlwifi_swcrypto |
using crypto in software (default 0 [hardware]) |
compat.linuxkpi.iwlwifi_uapsd_disable |
disable U-APSD functionality, bitmap: 1: BSS, 2: P2P Client (default: 3) |
compat.linuxkpi.lkpi_pci_nseg1_fail |
Count of single-segment busdma mapping failures |
compat.linuxkpi.mlx4_enable_4k_uar |
Enable using 4K UAR. Should not be enabled if there are VFs which do not support 4K UARs (default: false) |
compat.linuxkpi.mlx4_enable_64b_cqe_eqe |
Enable 64 byte CQEs/EQEs when the FW supports this (default: True) |
compat.linuxkpi.mlx4_enable_qos |
Enable Enhanced QoS support (default: off) |
compat.linuxkpi.mlx4_inline_thold |
Threshold for using inline data (range: 17-104, default: 104) |
compat.linuxkpi.mlx4_internal_err_reset |
Reset device on internal errors if non-zero (default 1) |
compat.linuxkpi.mlx4_log_mtts_per_seg |
Log2 number of MTT entries per segment (1-7) |
compat.linuxkpi.mlx4_log_num_mac |
Log2 max number of MACs per ETH port (1-7) |
compat.linuxkpi.mlx4_log_num_mgm_entry_size |
log mgm size, which defines the number of QPs per MCG; for example, 10 gives 248. Range: 7 <= log_num_mgm_entry_size <= 12. To activate device-managed flow steering when available, set to -1 |
compat.linuxkpi.mlx4_log_num_vlan |
Log2 max number of VLANs per ETH port (0-7) |
compat.linuxkpi.mlx4_msi_x |
attempt to use MSI-X if nonzero |
compat.linuxkpi.mlx4_pfcrx |
Priority based Flow Control policy on RX[7:0]. Per priority bit mask |
compat.linuxkpi.mlx4_pfctx |
Priority based Flow Control policy on TX[7:0]. Per priority bit mask |
compat.linuxkpi.mlx4_udp_rss |
Enable RSS for incoming UDP traffic, or disable it (0) |
compat.linuxkpi.mlx4_use_prio |
Enable steering by VLAN priority on ETH ports (deprecated) |
compat.linuxkpi.net_ratelimit |
Limit number of LinuxKPI net messages per second. |
compat.linuxkpi.pci_ids_name |
iwlwifi PCI IDs and names |
compat.linuxkpi.rcu_debug |
Set to enable RCU warning. Clear to disable. |
compat.linuxkpi.skb |
LinuxKPI skbuff |
compat.linuxkpi.skb.debug |
SKB debug level |
compat.linuxkpi.skb.mem_limit |
SKB memory limit: 0=no limit, 1=32bit, 2=36bit, other=undef (currently 32bit) |
compat.linuxkpi.task_struct_reserve |
Number of struct task and struct mm to reserve for non-sleepable allocations |
compat.linuxkpi.ttm_dma32_pages_limit |
Limit for the allocated DMA32 pages |
compat.linuxkpi.ttm_page_pool_size |
Number of pages in the WC/UC/DMA pool |
compat.linuxkpi.ttm_pages_limit |
Limit for the allocated pages |
compat.linuxkpi.warn_dump_stack |
Set to enable stack traces from WARN_ON(). Clear to disable. |
debug |
Debugging |
debug.acpi |
ACPI debugging |
debug.acpi.acpi_ca_version |
Version of Intel ACPI-CA |
debug.acpi.batt |
Battery debugging |
debug.acpi.batt.batt_sleep_ms |
Sleep during battery status updates to prevent keystroke loss. |
debug.acpi.default_register_width |
Ignore register widths set by FADT |
debug.acpi.ec |
EC debugging |
debug.acpi.ec.burst |
Enable use of burst mode (faster for nearly all systems) |
debug.acpi.ec.polled |
Force use of polled mode (only if interrupt mode doesn't work) |
debug.acpi.ec.timeout |
Total time spent waiting for a response (poll+sleep) |
debug.acpi.enable_debug_objects |
Enable Debug objects |
debug.acpi.interpreter_slack |
Turn on interpreter slack mode. |
debug.acpi.max_tasks |
Maximum acpi tasks |
debug.acpi.max_threads |
Maximum acpi threads |
debug.acpi.resume_beep |
Beep the PC speaker when resuming |
debug.acpi.suspend_bounce |
Don't actually suspend, just test devices. |
debug.acpi.tasks_hiwater |
Peak demand for ACPI event task slots. |
debug.adaptive_machine_arch |
Adapt reported machine architecture to the ABI of the binary |
debug.allow_insane_settime |
do not perform possibly restrictive checks on settime(2) args |
debug.bigcgs |
|
debug.bioq_batchsize |
BIOQ batch size |
debug.boothowto |
Boot control flags, passed from loader |
debug.bootverbose |
Control the output of verbose kernel messages |
debug.clock_do_io |
Trigger one-time IO on RTC clocks; 1=read (and discard), 2=write |
debug.clock_show_io |
Enable debug printing of RTC clock I/O; 1=reads, 2=writes, 3=both. |
debug.clocktime |
Enable printing of clocktime debugging |
debug.cpufreq |
cpufreq debugging |
debug.cpufreq.lowest |
Don't provide levels below this frequency. |
debug.cpufreq.verbose |
Print verbose debugging messages |
debug.ddb |
DDB settings |
debug.ddb.capture |
DDB capture options |
debug.ddb.capture.bufoff |
Bytes of data in DDB capture buffer |
debug.ddb.capture.bufsize |
Size of DDB capture buffer |
debug.ddb.capture.data |
DDB capture data |
debug.ddb.capture.inprogress |
DDB output capture in progress |
debug.ddb.capture.maxbufsize |
Maximum value for debug.ddb.capture.bufsize |
debug.ddb.prioritize_control_input |
Drop input when the buffer fills in order to keep servicing ^C/^S/^Q |
debug.ddb.scripting |
DDB script settings |
debug.ddb.scripting.script |
Set a script |
debug.ddb.scripting.scripts |
List of defined scripts |
debug.ddb.scripting.unscript |
Unset a script |
debug.ddb.textdump |
DDB textdump options |
debug.ddb.textdump.do_config |
Dump kernel configuration in textdump |
debug.ddb.textdump.do_ddb |
Dump DDB captured output in textdump |
debug.ddb.textdump.do_msgbuf |
Dump kernel message buffer in textdump |
debug.ddb.textdump.do_panic |
Dump kernel panic message in textdump |
debug.ddb.textdump.do_version |
Dump kernel version string in textdump |
debug.ddb.textdump.pending |
Perform textdump instead of regular kernel dump. |
debug.ddb_use_printf |
use printf for all ddb output |
debug.deadlkres |
Deadlock resolver |
debug.deadlkres.blktime_threshold |
Number of seconds within which it is valid to block on a turnstile |
debug.deadlkres.sleepfreq |
Number of seconds between any deadlock resolver thread run |
debug.deadlkres.slptime_threshold |
Number of seconds within which it is valid to sleep on a sleepqueue |
debug.debugger_on_panic |
Run debugger on kernel panic |
debug.debugger_on_recursive_panic |
Run debugger on recursive kernel panic |
debug.debugger_on_trap |
Run debugger on kernel trap before panic |
debug.devfs_iosize_max_clamp |
Clamp max i/o size to INT_MAX for devices |
debug.dircheck |
|
debug.dobkgrdwrite |
Do background writes (honoring the BV_BKGRDWRITE flag)? |
debug.dump_modinfo |
pretty-print the bootloader metadata |
debug.efi_time |
|
debug.elf32_legacy_coredump |
include all and only RW pages in core dumps |
debug.elf64_legacy_coredump |
include all and only RW pages in core dumps |
debug.fail_point |
fail points |
debug.fail_point.fill_kinfo_vnode__random_path |
|
debug.fail_point.g_mirror_metadata_write |
|
debug.fail_point.g_mirror_regular_request_delete |
|
debug.fail_point.g_mirror_regular_request_flush |
|
debug.fail_point.g_mirror_regular_request_read |
|
debug.fail_point.g_mirror_regular_request_speedup |
|
debug.fail_point.g_mirror_regular_request_write |
|
debug.fail_point.g_mirror_sync_request_read |
|
debug.fail_point.g_mirror_sync_request_write |
|
debug.fail_point.ioat_release |
|
debug.fail_point.nfscl_force_fileid_warning |
|
debug.fail_point.nlm_deny_grant |
|
debug.fail_point.random_fortuna_pre_read |
|
debug.fail_point.status_fill_kinfo_vnode__random_path |
|
debug.fail_point.status_g_mirror_metadata_write |
|
debug.fail_point.status_g_mirror_regular_request_delete |
|
debug.fail_point.status_g_mirror_regular_request_flush |
|
debug.fail_point.status_g_mirror_regular_request_read |
|
debug.fail_point.status_g_mirror_regular_request_speedup |
|
debug.fail_point.status_g_mirror_regular_request_write |
|
debug.fail_point.status_g_mirror_sync_request_read |
|
debug.fail_point.status_g_mirror_sync_request_write |
|
debug.fail_point.status_ioat_release |
|
debug.fail_point.status_nfscl_force_fileid_warning |
|
debug.fail_point.status_nlm_deny_grant |
|
debug.fail_point.status_random_fortuna_pre_read |
|
debug.fail_point.status_sysctl_running |
|
debug.fail_point.status_test_fail_point |
|
debug.fail_point.sysctl_running |
|
debug.fail_point.test_fail_point |
|
debug.fail_point.test_trigger_fail_point |
Trigger test fail points |
debug.fdc |
fdc driver |
debug.fdc.debugflags |
Debug flags |
debug.fdc.fifo |
FIFO threshold setting |
debug.fdc.retries |
Number of retries to attempt |
debug.fdc.settle |
Head settling time in sec/hz |
debug.fdc.spec1 |
Specification byte one (step-rate + head unload) |
debug.fdc.spec2 |
Specification byte two (head load time + no-dma) |
debug.firmware_max_size |
Max size permitted for a firmware file. |
debug.ftry_reclaim_vnode |
Try to reclaim a vnode by its file descriptor |
debug.gdb |
GDB settings |
debug.gdb.cons |
copy console messages to GDB |
debug.gdb.netgdb |
NetGDB parameters |
debug.gdb.netgdb.debug |
Debug message verbosity (0: off; 1: on) |
debug.gdbcons |
copy console messages to GDB |
debug.hwpstate_intel0 |
|
debug.hwpstate_intel1 |
|
debug.hwpstate_intel10 |
|
debug.hwpstate_intel11 |
|
debug.hwpstate_intel2 |
|
debug.hwpstate_intel3 |
|
debug.hwpstate_intel4 |
|
debug.hwpstate_intel5 |
|
debug.hwpstate_intel6 |
|
debug.hwpstate_intel7 |
|
debug.hwpstate_intel8 |
|
debug.hwpstate_intel9 |
|
debug.hwpstate_pstate_limit |
If enabled (1), limit administrative control of P-states to the value in CurPstateLimit |
debug.hwpstate_verbose |
Debug hwpstate |
debug.hwpstate_verify |
Verify P-state after setting |
debug.if_tun_debug |
|
debug.ig4_dump |
Dump controller registers |
debug.ig4_timings |
Controller timings: 0=ACPI, 1=predefined, 2=legacy, 3=do not change |
debug.iosize_max_clamp |
Clamp max i/o size to INT_MAX |
debug.ipw |
ipw debug level |
debug.iwi |
iwi debug level |
debug.kassert |
kassert options |
debug.kassert.do_kdb |
KASSERT will enter the debugger |
debug.kassert.do_log |
If warn_only is enabled, log (1) or do not log (0) assertion violations |
debug.kassert.kassert |
set to trigger a test kassert |
debug.kassert.log_mute_at |
max number of KASSERTS to log |
debug.kassert.log_panic_at |
max number of KASSERTS before we will panic |
debug.kassert.log_pps_limit |
limit number of log messages per second |
debug.kassert.suppress_in_panic |
KASSERTs will be suppressed while handling a panic |
debug.kassert.warn_only |
KASSERT triggers a panic (0) or just a warning (1) |
debug.kassert.warnings |
number of KASSERTs that have been triggered |
debug.kdb |
KDB nodes |
debug.kdb.alt_break_to_debugger |
Enable alternative break to debugger |
debug.kdb.available |
list of available KDB backends |
debug.kdb.break_to_debugger |
Enable break to debugger |
debug.kdb.current |
currently selected KDB backend |
debug.kdb.enter |
set to enter the debugger |
debug.kdb.enter_securelevel |
Maximum securelevel to enter a KDB backend |
debug.kdb.panic |
set to panic the kernel |
debug.kdb.panic_str |
trigger a kernel panic, using the provided string as the panic message |
debug.kdb.stack_overflow |
set to cause a stack overflow |
debug.kdb.trap |
set to cause a page fault via data access |
debug.kdb.trap_code |
set to cause a page fault via code access |
debug.link_elf_leak_locals |
Allow local symbols to participate in global module symbol resolution |
debug.link_elf_obj_leak_locals |
Allow local symbols to participate in global module symbol resolution |
debug.lock |
lock debugging |
debug.lock.delay |
lock delay |
debug.lock.delay.restrict_starvation |
|
debug.lock.delay.starvation_limit |
|
debug.lock.delay_base |
|
debug.lock.delay_loops |
|
debug.lock.delay_max |
|
debug.lock.delay_retries |
|
debug.lockmgr |
lockmgr debugging |
debug.lockmgr.adaptive_spinning |
|
debug.malloc |
Kernel malloc debugging options |
debug.malloc.numzones |
Number of malloc uma subzones |
debug.malloc.zone_offset |
Separate malloc types by examining the Nth character in the malloc type short description. |
debug.max_vnlru_free |
limit on vnode free requests per call to the vnlru_free routine (legacy) |
debug.mddebug |
Enable md(4) debug messages |
debug.minidump |
Enable mini crash dumps |
debug.nchash |
Size of namecache hash table |
debug.ncores |
Maximum number of generated process corefiles while using index format |
debug.nlm_debug |
|
debug.nullfs_bug_bypass |
|
debug.obsolete_panic |
Panic when obsolete features are used (0 = never, 1 = if obsolete, 2 = if deprecated) |
debug.osd |
OSD debug level |
debug.psm |
ps/2 mouse |
debug.psm.errsecs |
Number of seconds during which packets will be dropped after a sync error |
debug.psm.errusecs |
Microseconds to add to psmerrsecs |
debug.psm.hz |
Frequency of the softcallout (in hz) |
debug.psm.loglevel |
Verbosity level |
debug.psm.pkterrthresh |
Number of error packets allowed before reinitializing the mouse |
debug.psm.secs |
Max number of seconds between soft interrupts |
debug.psm.usecs |
Microseconds to add to psmsecs |
debug.ptrace_attach_transparent |
Hide wakes from PT_ATTACH on interruptible sleeps |
debug.rangelock_cheat |
|
debug.rman_debug |
rman debug |
debug.rush_requests |
Number of times I/O was sped up (rush requests) |
debug.sizeof |
Sizeof various things |
debug.sizeof.bio |
sizeof(struct bio) |
debug.sizeof.buf |
sizeof(struct buf) |
debug.sizeof.cdev |
sizeof(struct cdev) |
debug.sizeof.cdev_priv |
sizeof(struct cdev_priv) |
debug.sizeof.devstat |
sizeof(struct devstat) |
debug.sizeof.g_bioq |
sizeof(struct g_bioq) |
debug.sizeof.g_class |
sizeof(struct g_class) |
debug.sizeof.g_consumer |
sizeof(struct g_consumer) |
debug.sizeof.g_geom |
sizeof(struct g_geom) |
debug.sizeof.g_provider |
sizeof(struct g_provider) |
debug.sizeof.kinfo_proc |
sizeof(struct kinfo_proc) |
debug.sizeof.namecache |
sizeof(struct namecache) |
debug.sizeof.pcb |
sizeof(struct pcb) |
debug.sizeof.proc |
sizeof(struct proc) |
debug.sizeof.vnode |
sizeof(struct vnode) |
debug.sizeof.znode |
sizeof(znode_t) |
debug.smr |
SMR Stats |
debug.smr.advance |
|
debug.smr.advance_wait |
|
debug.smr.poll |
|
debug.smr.poll_fail |
|
debug.smr.poll_scan |
|
debug.softdep |
soft updates stats |
debug.softdep.blk_limit_hit |
|
debug.softdep.blk_limit_push |
|
debug.softdep.cleanup_blkrequests |
|
debug.softdep.cleanup_failures |
|
debug.softdep.cleanup_high_delay |
|
debug.softdep.cleanup_inorequests |
|
debug.softdep.cleanup_retries |
|
debug.softdep.current |
current dependencies allocated |
debug.softdep.current.allocdirect |
|
debug.softdep.current.allocindir |
|
debug.softdep.current.bmsafemap |
|
debug.softdep.current.diradd |
|
debug.softdep.current.dirrem |
|
debug.softdep.current.freeblks |
|
debug.softdep.current.freedep |
|
debug.softdep.current.freefile |
|
debug.softdep.current.freefrag |
|
debug.softdep.current.freework |
|
debug.softdep.current.indirdep |
|
debug.softdep.current.inodedep |
|
debug.softdep.current.jaddref |
|
debug.softdep.current.jfreeblk |
|
debug.softdep.current.jfreefrag |
|
debug.softdep.current.jfsync |
|
debug.softdep.current.jmvref |
|
debug.softdep.current.jnewblk |
|
debug.softdep.current.jremref |
|
debug.softdep.current.jseg |
|
debug.softdep.current.jsegdep |
|
debug.softdep.current.jtrunc |
|
debug.softdep.current.mkdir |
|
debug.softdep.current.newblk |
|
debug.softdep.current.newdirblk |
|
debug.softdep.current.pagedep |
|
debug.softdep.current.sbdep |
|
debug.softdep.delayed_inactivations |
|
debug.softdep.dir_entry |
|
debug.softdep.direct_blk_ptrs |
|
debug.softdep.emptyjblocks |
|
debug.softdep.flush_threads |
|
debug.softdep.flushcache |
|
debug.softdep.highuse |
high use dependencies allocated |
debug.softdep.highuse.allocdirect |
|
debug.softdep.highuse.allocindir |
|
debug.softdep.highuse.bmsafemap |
|
debug.softdep.highuse.diradd |
|
debug.softdep.highuse.dirrem |
|
debug.softdep.highuse.freeblks |
|
debug.softdep.highuse.freedep |
|
debug.softdep.highuse.freefile |
|
debug.softdep.highuse.freefrag |
|
debug.softdep.highuse.freework |
|
debug.softdep.highuse.indirdep |
|
debug.softdep.highuse.inodedep |
|
debug.softdep.highuse.jaddref |
|
debug.softdep.highuse.jfreeblk |
|
debug.softdep.highuse.jfreefrag |
|
debug.softdep.highuse.jfsync |
|
debug.softdep.highuse.jmvref |
|
debug.softdep.highuse.jnewblk |
|
debug.softdep.highuse.jremref |
|
debug.softdep.highuse.jseg |
|
debug.softdep.highuse.jsegdep |
|
debug.softdep.highuse.jtrunc |
|
debug.softdep.highuse.mkdir |
|
debug.softdep.highuse.newblk |
|
debug.softdep.highuse.newdirblk |
|
debug.softdep.highuse.pagedep |
|
debug.softdep.highuse.sbdep |
|
debug.softdep.indir_blk_ptrs |
|
debug.softdep.ino_limit_hit |
|
debug.softdep.ino_limit_push |
|
debug.softdep.inode_bitmap |
|
debug.softdep.jaddref_rollback |
|
debug.softdep.jnewblk_rollback |
|
debug.softdep.journal_low |
|
debug.softdep.journal_min |
|
debug.softdep.journal_wait |
|
debug.softdep.jwait_filepage |
|
debug.softdep.jwait_freeblks |
|
debug.softdep.jwait_inode |
|
debug.softdep.jwait_newblk |
|
debug.softdep.max_softdeps |
|
debug.softdep.print_threads |
Notify flusher thread start/stop |
debug.softdep.sync_limit_hit |
|
debug.softdep.tickdelay |
|
debug.softdep.total |
total dependencies allocated |
debug.softdep.total.allocdirect |
|
debug.softdep.total.allocindir |
|
debug.softdep.total.bmsafemap |
|
debug.softdep.total.diradd |
|
debug.softdep.total.dirrem |
|
debug.softdep.total.freeblks |
|
debug.softdep.total.freedep |
|
debug.softdep.total.freefile |
|
debug.softdep.total.freefrag |
|
debug.softdep.total.freework |
|
debug.softdep.total.indirdep |
|
debug.softdep.total.inodedep |
|
debug.softdep.total.jaddref |
|
debug.softdep.total.jfreeblk |
|
debug.softdep.total.jfreefrag |
|
debug.softdep.total.jfsync |
|
debug.softdep.total.jmvref |
|
debug.softdep.total.jnewblk |
|
debug.softdep.total.jremref |
|
debug.softdep.total.jseg |
|
debug.softdep.total.jsegdep |
|
debug.softdep.total.jtrunc |
|
debug.softdep.total.mkdir |
|
debug.softdep.total.newblk |
|
debug.softdep.total.newdirblk |
|
debug.softdep.total.pagedep |
|
debug.softdep.total.sbdep |
|
debug.softdep.worklist_push |
|
debug.softdep.write |
current dependencies written |
debug.softdep.write.allocdirect |
|
debug.softdep.write.allocindir |
|
debug.softdep.write.bmsafemap |
|
debug.softdep.write.diradd |
|
debug.softdep.write.dirrem |
|
debug.softdep.write.freeblks |
|
debug.softdep.write.freedep |
|
debug.softdep.write.freefile |
|
debug.softdep.write.freefrag |
|
debug.softdep.write.freework |
|
debug.softdep.write.indirdep |
|
debug.softdep.write.inodedep |
|
debug.softdep.write.jaddref |
|
debug.softdep.write.jfreeblk |
|
debug.softdep.write.jfreefrag |
|
debug.softdep.write.jfsync |
|
debug.softdep.write.jmvref |
|
debug.softdep.write.jnewblk |
|
debug.softdep.write.jremref |
|
debug.softdep.write.jseg |
|
debug.softdep.write.jsegdep |
|
debug.softdep.write.jtrunc |
|
debug.softdep.write.mkdir |
|
debug.softdep.write.newblk |
|
debug.softdep.write.newdirblk |
|
debug.softdep.write.pagedep |
|
debug.softdep.write.sbdep |
|
debug.trace_all_panics |
Print stack traces on secondary kernel panics |
debug.trace_on_panic |
Print stack trace on kernel panic |
debug.try_reclaim_vnode |
Try to reclaim a vnode by its pathname |
debug.uart_force_poll |
Force UART polling |
debug.uart_poll_freq |
UART poll frequency |
debug.uma_reclaim |
set to generate request to reclaim uma caches |
debug.uma_reclaim_domain |
|
debug.umtx |
umtx debug |
debug.umtx.robust_faults_verbose |
|
debug.umtx.umtx_pi_allocated |
Allocated umtx_pi |
debug.vm_lowmem |
set to trigger vm_lowmem event with given flags |
debug.vmmap_check |
Enable vm map consistency checking |
debug.vn_io_fault_enable |
Enable vn_io_fault lock avoidance |
debug.vn_io_fault_prefault |
Enable vn_io_fault prefaulting |
debug.vn_io_faults |
Count of vn_io_fault lock avoidance triggers |
debug.vn_io_pgcache_read_enable |
Enable copying from page cache for reads, avoiding fs |
debug.vn_lock_pair_pause |
Count of vn_lock_pair deadlocks |
debug.vn_lock_pair_pause_max |
Max ticks for vn_lock_pair deadlock avoidance sleep |
debug.vnlru_nowhere |
Number of times the vnlru process ran without success |
debug.vnode_domainset |
Default vnode NUMA policy |
debug.witness |
Witness Locking |
debug.witness.badstacks |
Show bad witness stacks |
debug.witness.free_cnt |
|
debug.witness.fullgraph |
Show locks relation graphs |
debug.witness.kdb |
|
debug.witness.output_channel |
Output channel for warnings |
debug.witness.skipspin |
|
debug.witness.sleep_cnt |
|
debug.witness.spin_cnt |
|
debug.witness.trace |
|
debug.witness.watch |
witness is watching lock operations |
debug.witness.witness_count |
|
debug.x86bios |
x86bios debugging |
debug.x86bios.call |
Trace far function calls |
debug.x86bios.int |
Trace software interrupt handlers |
dev |
|
dev.acpi |
|
dev.acpi.%parent |
parent class |
dev.acpi.[num] |
|
dev.acpi.[num].%desc |
device description |
dev.acpi.[num].%driver |
device driver name |
dev.acpi.[num].%iommu |
iommu unit handling the device requests |
dev.acpi.[num].%location |
device location relative to parent |
dev.acpi.[num].%parent |
parent device |
dev.acpi.[num].%pnpinfo |
device identification |
dev.acpi_acad |
|
dev.acpi_acad.%parent |
parent class |
dev.acpi_acad.[num] |
|
dev.acpi_acad.[num].%desc |
device description |
dev.acpi_acad.[num].%driver |
device driver name |
dev.acpi_acad.[num].%iommu |
iommu unit handling the device requests |
dev.acpi_acad.[num].%location |
device location relative to parent |
dev.acpi_acad.[num].%parent |
parent device |
dev.acpi_acad.[num].%pnpinfo |
device identification |
dev.acpi_button |
|
dev.acpi_button.%parent |
parent class |
dev.acpi_button.[num] |
|
dev.acpi_button.[num].%desc |
device description |
dev.acpi_button.[num].%driver |
device driver name |
dev.acpi_button.[num].%iommu |
iommu unit handling the device requests |
dev.acpi_button.[num].%location |
device location relative to parent |
dev.acpi_button.[num].%parent |
parent device |
dev.acpi_button.[num].%pnpinfo |
device identification |
dev.acpi_ec |
|
dev.acpi_ec.%parent |
parent class |
dev.acpi_ec.[num] |
|
dev.acpi_ec.[num].%desc |
device description |
dev.acpi_ec.[num].%driver |
device driver name |
dev.acpi_ec.[num].%iommu |
iommu unit handling the device requests |
dev.acpi_ec.[num].%location |
device location relative to parent |
dev.acpi_ec.[num].%parent |
parent device |
dev.acpi_ec.[num].%pnpinfo |
device identification |
dev.acpi_lid |
|
dev.acpi_lid.%parent |
parent class |
dev.acpi_lid.[num] |
|
dev.acpi_lid.[num].%desc |
device description |
dev.acpi_lid.[num].%driver |
device driver name |
dev.acpi_lid.[num].%iommu |
iommu unit handling the device requests |
dev.acpi_lid.[num].%location |
device location relative to parent |
dev.acpi_lid.[num].%parent |
parent device |
dev.acpi_lid.[num].%pnpinfo |
device identification |
dev.acpi_lid.[num].state |
Device state (0 = closed, 1 = open) |
dev.acpi_perf |
|
dev.acpi_perf.%parent |
parent class |
dev.acpi_perf.[num] |
|
dev.acpi_perf.[num].%desc |
device description |
dev.acpi_perf.[num].%driver |
device driver name |
dev.acpi_perf.[num].%iommu |
iommu unit handling the device requests |
dev.acpi_perf.[num].%location |
device location relative to parent |
dev.acpi_perf.[num].%parent |
parent device |
dev.acpi_perf.[num].%pnpinfo |
device identification |
dev.acpi_syscontainer |
|
dev.acpi_syscontainer.%parent |
parent class |
dev.acpi_syscontainer.[num] |
|
dev.acpi_syscontainer.[num].%desc |
device description |
dev.acpi_syscontainer.[num].%driver |
device driver name |
dev.acpi_syscontainer.[num].%iommu |
iommu unit handling the device requests |
dev.acpi_syscontainer.[num].%location |
device location relative to parent |
dev.acpi_syscontainer.[num].%parent |
parent device |
dev.acpi_syscontainer.[num].%pnpinfo |
device identification |
dev.acpi_sysresource |
|
dev.acpi_sysresource.%parent |
parent class |
dev.acpi_sysresource.[num] |
|
dev.acpi_sysresource.[num].%desc |
device description |
dev.acpi_sysresource.[num].%driver |
device driver name |
dev.acpi_sysresource.[num].%iommu |
iommu unit handling the device requests |
dev.acpi_sysresource.[num].%location |
device location relative to parent |
dev.acpi_sysresource.[num].%parent |
parent device |
dev.acpi_sysresource.[num].%pnpinfo |
device identification |
dev.acpi_timer |
|
dev.acpi_timer.%parent |
parent class |
dev.acpi_timer.[num] |
|
dev.acpi_timer.[num].%desc |
device description |
dev.acpi_timer.[num].%driver |
device driver name |
dev.acpi_timer.[num].%iommu |
iommu unit handling the device requests |
dev.acpi_timer.[num].%location |
device location relative to parent |
dev.acpi_timer.[num].%parent |
parent device |
dev.acpi_timer.[num].%pnpinfo |
device identification |
dev.acpi_tz |
|
dev.acpi_tz.%parent |
parent class |
dev.acpi_tz.[num] |
|
dev.acpi_tz.[num].%desc |
device description |
dev.acpi_tz.[num].%driver |
device driver name |
dev.acpi_tz.[num].%iommu |
iommu unit handling the device requests |
dev.acpi_tz.[num].%location |
device location relative to parent |
dev.acpi_tz.[num].%parent |
parent device |
dev.acpi_tz.[num].%pnpinfo |
device identification |
dev.acpi_wmi |
|
dev.acpi_wmi.%parent |
parent class |
dev.acpi_wmi.[num] |
|
dev.acpi_wmi.[num].%desc |
device description |
dev.acpi_wmi.[num].%driver |
device driver name |
dev.acpi_wmi.[num].%iommu |
iommu unit handling the device requests |
dev.acpi_wmi.[num].%location |
device location relative to parent |
dev.acpi_wmi.[num].%parent |
parent device |
dev.acpi_wmi.[num].%pnpinfo |
device identification |
dev.acpi_wmi.[num].bmof |
MOF Blob |
dev.aesni |
|
dev.aesni.%parent |
parent class |
dev.aesni.[num] |
|
dev.aesni.[num].%desc |
device description |
dev.aesni.[num].%driver |
device driver name |
dev.aesni.[num].%iommu |
iommu unit handling the device requests |
dev.aesni.[num].%location |
device location relative to parent |
dev.aesni.[num].%parent |
parent device |
dev.aesni.[num].%pnpinfo |
device identification |
dev.ahci |
|
dev.ahci.%parent |
parent class |
dev.ahci.[num] |
|
dev.ahci.[num].%desc |
device description |
dev.ahci.[num].%domain |
NUMA domain |
dev.ahci.[num].%driver |
device driver name |
dev.ahci.[num].%iommu |
iommu unit handling the device requests |
dev.ahci.[num].%location |
device location relative to parent |
dev.ahci.[num].%parent |
parent device |
dev.ahci.[num].%pnpinfo |
device identification |
dev.ahcich |
|
dev.ahcich.%parent |
parent class |
dev.ahcich.[num] |
|
dev.ahcich.[num].%desc |
device description |
dev.ahcich.[num].%domain |
NUMA domain |
dev.ahcich.[num].%driver |
device driver name |
dev.ahcich.[num].%iommu |
iommu unit handling the device requests |
dev.ahcich.[num].%location |
device location relative to parent |
dev.ahcich.[num].%parent |
parent device |
dev.ahcich.[num].%pnpinfo |
device identification |
dev.ahcich.[num].disable_phy |
Disable PHY |
dev.ahciem |
|
dev.ahciem.%parent |
parent class |
dev.ahciem.[num] |
|
dev.ahciem.[num].%desc |
device description |
dev.ahciem.[num].%domain |
NUMA domain |
dev.ahciem.[num].%driver |
device driver name |
dev.ahciem.[num].%iommu |
iommu unit handling the device requests |
dev.ahciem.[num].%location |
device location relative to parent |
dev.ahciem.[num].%parent |
parent device |
dev.ahciem.[num].%pnpinfo |
device identification |
dev.apei |
|
dev.apei.%parent |
parent class |
dev.apei.[num] |
|
dev.apei.[num].%desc |
device description |
dev.apei.[num].%driver |
device driver name |
dev.apei.[num].%iommu |
iommu unit handling the device requests |
dev.apei.[num].%location |
device location relative to parent |
dev.apei.[num].%parent |
parent device |
dev.apei.[num].%pnpinfo |
device identification |
dev.apic |
|
dev.apic.%parent |
parent class |
dev.apic.[num] |
|
dev.apic.[num].%desc |
device description |
dev.apic.[num].%driver |
device driver name |
dev.apic.[num].%iommu |
iommu unit handling the device requests |
dev.apic.[num].%location |
device location relative to parent |
dev.apic.[num].%parent |
parent device |
dev.apic.[num].%pnpinfo |
device identification |
dev.ata |
|
dev.ata.%parent |
parent class |
dev.ata.[num] |
|
dev.ata.[num].%desc |
device description |
dev.ata.[num].%driver |
device driver name |
dev.ata.[num].%iommu |
iommu unit handling the device requests |
dev.ata.[num].%location |
device location relative to parent |
dev.ata.[num].%parent |
parent device |
dev.ata.[num].%pnpinfo |
device identification |
dev.atapci |
|
dev.atapci.%parent |
parent class |
dev.atapci.[num] |
|
dev.atapci.[num].%desc |
device description |
dev.atapci.[num].%driver |
device driver name |
dev.atapci.[num].%iommu |
iommu unit handling the device requests |
dev.atapci.[num].%location |
device location relative to parent |
dev.atapci.[num].%parent |
parent device |
dev.atapci.[num].%pnpinfo |
device identification |
dev.atdma |
|
dev.atdma.%parent |
parent class |
dev.atdma.[num] |
|
dev.atdma.[num].%desc |
device description |
dev.atdma.[num].%driver |
device driver name |
dev.atdma.[num].%iommu |
iommu unit handling the device requests |
dev.atdma.[num].%location |
device location relative to parent |
dev.atdma.[num].%parent |
parent device |
dev.atdma.[num].%pnpinfo |
device identification |
dev.atkbd |
|
dev.atkbd.%parent |
parent class |
dev.atkbd.[num] |
|
dev.atkbd.[num].%desc |
device description |
dev.atkbd.[num].%domain |
NUMA domain |
dev.atkbd.[num].%driver |
device driver name |
dev.atkbd.[num].%iommu |
iommu unit handling the device requests |
dev.atkbd.[num].%location |
device location relative to parent |
dev.atkbd.[num].%parent |
parent device |
dev.atkbd.[num].%pnpinfo |
device identification |
dev.atkbdc |
|
dev.atkbdc.%parent |
parent class |
dev.atkbdc.[num] |
|
dev.atkbdc.[num].%desc |
device description |
dev.atkbdc.[num].%domain |
NUMA domain |
dev.atkbdc.[num].%driver |
device driver name |
dev.atkbdc.[num].%iommu |
iommu unit handling the device requests |
dev.atkbdc.[num].%location |
device location relative to parent |
dev.atkbdc.[num].%parent |
parent device |
dev.atkbdc.[num].%pnpinfo |
device identification |
dev.atkbdc.[num].wake |
Device set to wake the system |
dev.atrtc |
|
dev.atrtc.%parent |
parent class |
dev.atrtc.[num] |
|
dev.atrtc.[num].%desc |
device description |
dev.atrtc.[num].%driver |
device driver name |
dev.atrtc.[num].%iommu |
iommu unit handling the device requests |
dev.atrtc.[num].%location |
device location relative to parent |
dev.atrtc.[num].%parent |
parent device |
dev.atrtc.[num].%pnpinfo |
device identification |
dev.attimer |
|
dev.attimer.%parent |
parent class |
dev.attimer.[num] |
|
dev.attimer.[num].%desc |
device description |
dev.attimer.[num].%driver |
device driver name |
dev.attimer.[num].%iommu |
iommu unit handling the device requests |
dev.attimer.[num].%location |
device location relative to parent |
dev.attimer.[num].%parent |
parent device |
dev.attimer.[num].%pnpinfo |
device identification |
dev.battery |
|
dev.battery.%parent |
parent class |
dev.battery.[num] |
|
dev.battery.[num].%desc |
device description |
dev.battery.[num].%driver |
device driver name |
dev.battery.[num].%iommu |
iommu unit handling the device requests |
dev.battery.[num].%location |
device location relative to parent |
dev.battery.[num].%parent |
parent device |
dev.battery.[num].%pnpinfo |
device identification |
dev.coretemp |
|
dev.coretemp.%parent |
parent class |
dev.coretemp.[num] |
|
dev.coretemp.[num].%desc |
device description |
dev.coretemp.[num].%driver |
device driver name |
dev.coretemp.[num].%iommu |
iommu unit handling the device requests |
dev.coretemp.[num].%location |
device location relative to parent |
dev.coretemp.[num].%parent |
parent device |
dev.coretemp.[num].%pnpinfo |
device identification |
dev.cpu |
|
dev.cpu.%parent |
parent class |
dev.cpu.[num] |
|
dev.cpu.[num].%desc |
device description |
dev.cpu.[num].%driver |
device driver name |
dev.cpu.[num].%iommu |
iommu unit handling the device requests |
dev.cpu.[num].%location |
device location relative to parent |
dev.cpu.[num].%parent |
parent device |
dev.cpu.[num].%pnpinfo |
device identification |
dev.cpu.[num].coretemp |
Per-CPU thermal information |
dev.cpu.[num].coretemp.delta |
Delta between TCC activation and current temperature |
dev.cpu.[num].coretemp.resolution |
Resolution of CPU thermal sensor |
dev.cpu.[num].coretemp.throttle_log |
Set to 1 if the thermal sensor has tripped |
dev.cpu.[num].coretemp.tjmax |
TCC activation temperature |
dev.cpu.[num].cx_lowest |
lowest Cx sleep state to use |
dev.cpu.[num].cx_method |
Cx entrance methods |
dev.cpu.[num].cx_supported |
Cx/microsecond values for supported Cx states |
dev.cpu.[num].cx_usage |
percent usage for each Cx state |
dev.cpu.[num].cx_usage_counters |
Cx sleep state counters |
dev.cpu.[num].freq |
Current CPU frequency |
dev.cpu.[num].freq_levels |
CPU frequency levels |
dev.cpu.[num].temperature |
Current temperature |
dev.cpufreq |
|
dev.cpufreq.%parent |
parent class |
dev.cpufreq.[num] |
|
dev.cpufreq.[num].%desc |
device description |
dev.cpufreq.[num].%driver |
device driver name |
dev.cpufreq.[num].%iommu |
iommu unit handling the device requests |
dev.cpufreq.[num].%location |
device location relative to parent |
dev.cpufreq.[num].%parent |
parent device |
dev.cpufreq.[num].%pnpinfo |
device identification |
dev.cpufreq.[num].freq_driver |
cpufreq driver used by this cpu |
dev.cryptosoft |
|
dev.cryptosoft.%parent |
parent class |
dev.cryptosoft.[num] |
|
dev.cryptosoft.[num].%desc |
device description |
dev.cryptosoft.[num].%driver |
device driver name |
dev.cryptosoft.[num].%iommu |
iommu unit handling the device requests |
dev.cryptosoft.[num].%location |
device location relative to parent |
dev.cryptosoft.[num].%parent |
parent device |
dev.cryptosoft.[num].%pnpinfo |
device identification |
dev.drm |
DRM args (compat) |
dev.drm.[num] |
DRM properties |
dev.drm.[num].PCI_ID |
PCI vendor and device ID |
dev.drm.__drm_debug |
drm debug flags (compat) |
dev.drm.drm_debug_persist |
keep drm debug flags post-load (compat) |
dev.drm.skip_ddb |
go straight to dumping core (compat) |
dev.drmn |
|
dev.drmn.%parent |
parent class |
dev.drmn.[num] |
|
dev.drmn.[num].%desc |
device description |
dev.drmn.[num].%driver |
device driver name |
dev.drmn.[num].%iommu |
iommu unit handling the device requests |
dev.drmn.[num].%location |
device location relative to parent |
dev.drmn.[num].%parent |
parent device |
dev.drmn.[num].%pnpinfo |
device identification |
dev.efirtc |
|
dev.efirtc.%parent |
parent class |
dev.efirtc.[num] |
|
dev.efirtc.[num].%desc |
device description |
dev.efirtc.[num].%driver |
device driver name |
dev.efirtc.[num].%iommu |
iommu unit handling the device requests |
dev.efirtc.[num].%location |
device location relative to parent |
dev.efirtc.[num].%parent |
parent device |
dev.efirtc.[num].%pnpinfo |
device identification |
dev.ehci |
|
dev.ehci.%parent |
parent class |
dev.ehci.[num] |
|
dev.ehci.[num].%desc |
device description |
dev.ehci.[num].%domain |
NUMA domain |
dev.ehci.[num].%driver |
device driver name |
dev.ehci.[num].%iommu |
iommu unit handling the device requests |
dev.ehci.[num].%location |
device location relative to parent |
dev.ehci.[num].%parent |
parent device |
dev.ehci.[num].%pnpinfo |
device identification |
dev.ehci.[num].wake |
Device set to wake the system |
dev.em |
|
dev.em.%parent |
parent class |
dev.em.[num] |
|
dev.em.[num].%desc |
device description |
dev.em.[num].%driver |
device driver name |
dev.em.[num].%iommu |
iommu unit handling the device requests |
dev.em.[num].%location |
device location relative to parent |
dev.em.[num].%parent |
parent device |
dev.em.[num].%pnpinfo |
device identification |
dev.em.[num].debug |
Debug Information |
dev.em.[num].device_control |
Device Control Register |
dev.em.[num].dropped |
Driver dropped packets |
dev.em.[num].eee_control |
Disable Energy Efficient Ethernet |
dev.em.[num].enable_aim |
Interrupt Moderation (1=normal, 2=low latency) |
dev.em.[num].fc |
Flow Control |
dev.em.[num].fc_high_water |
Flow Control High Watermark |
dev.em.[num].fc_low_water |
Flow Control Low Watermark |
dev.em.[num].fw_version |
Prints FW/NVM Versions |
dev.em.[num].iflib |
IFLIB fields |
dev.em.[num].iflib.allocated_msix_vectors |
total # of MSI-X vectors allocated by driver |
dev.em.[num].iflib.core_offset |
offset to start using cores at |
dev.em.[num].iflib.disable_msix |
disable MSI-X (default 0) |
dev.em.[num].iflib.driver_version |
driver version |
dev.em.[num].iflib.override_nrxds |
list of # of RX descriptors to use, 0 = use default # |
dev.em.[num].iflib.override_nrxqs |
# of rxqs to use, 0 => use default # |
dev.em.[num].iflib.override_ntxds |
list of # of TX descriptors to use, 0 = use default # |
dev.em.[num].iflib.override_ntxqs |
# of txqs to use, 0 => use default # |
dev.em.[num].iflib.override_qs_enable |
permit #txq != #rxq |
dev.em.[num].iflib.rx_budget |
set the RX budget |
dev.em.[num].iflib.rxq0 |
Queue Name |
dev.em.[num].iflib.rxq0.cpu |
cpu this queue is bound to |
dev.em.[num].iflib.rxq0.rxq_fl0 |
freelist Name |
dev.em.[num].iflib.rxq0.rxq_fl0.buf_size |
buffer size |
dev.em.[num].iflib.rxq0.rxq_fl0.cidx |
Consumer Index |
dev.em.[num].iflib.rxq0.rxq_fl0.credits |
credits available |
dev.em.[num].iflib.rxq0.rxq_fl0.pidx |
Producer Index |
dev.em.[num].iflib.rxq1 |
Queue Name |
dev.em.[num].iflib.rxq1.cpu |
cpu this queue is bound to |
dev.em.[num].iflib.rxq1.rxq_fl0 |
freelist Name |
dev.em.[num].iflib.rxq1.rxq_fl0.buf_size |
buffer size |
dev.em.[num].iflib.rxq1.rxq_fl0.cidx |
Consumer Index |
dev.em.[num].iflib.rxq1.rxq_fl0.credits |
credits available |
dev.em.[num].iflib.rxq1.rxq_fl0.pidx |
Producer Index |
dev.em.[num].iflib.separate_txrx |
use separate cores for TX and RX |
dev.em.[num].iflib.tx_abdicate |
cause TX to abdicate instead of running to completion |
dev.em.[num].iflib.txq0 |
Queue Name |
dev.em.[num].iflib.txq0.cpu |
cpu this queue is bound to |
dev.em.[num].iflib.txq0.m_pullups |
# of times m_pullup was called |
dev.em.[num].iflib.txq0.mbuf_defrag |
# of times m_defrag was called |
dev.em.[num].iflib.txq0.mbuf_defrag_failed |
# of times m_defrag failed |
dev.em.[num].iflib.txq0.no_desc_avail |
# of times no descriptors were available |
dev.em.[num].iflib.txq0.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.em.[num].iflib.txq0.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.em.[num].iflib.txq0.r_drops |
# of drops in the mp_ring for this queue |
dev.em.[num].iflib.txq0.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.em.[num].iflib.txq0.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.em.[num].iflib.txq0.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.em.[num].iflib.txq0.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.em.[num].iflib.txq0.ring_state |
soft ring state |
dev.em.[num].iflib.txq0.tx_map_failed |
# of times DMA map failed |
dev.em.[num].iflib.txq0.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.em.[num].iflib.txq0.txq_cidx |
Consumer Index |
dev.em.[num].iflib.txq0.txq_cidx_processed |
Consumer Index seen by credit update |
dev.em.[num].iflib.txq0.txq_cleaned |
total cleaned |
dev.em.[num].iflib.txq0.txq_in_use |
descriptors in use |
dev.em.[num].iflib.txq0.txq_pidx |
Producer Index |
dev.em.[num].iflib.txq0.txq_processed |
descriptors processed for clean |
dev.em.[num].iflib.txq1 |
Queue Name |
dev.em.[num].iflib.txq1.cpu |
cpu this queue is bound to |
dev.em.[num].iflib.txq1.m_pullups |
# of times m_pullup was called |
dev.em.[num].iflib.txq1.mbuf_defrag |
# of times m_defrag was called |
dev.em.[num].iflib.txq1.mbuf_defrag_failed |
# of times m_defrag failed |
dev.em.[num].iflib.txq1.no_desc_avail |
# of times no descriptors were available |
dev.em.[num].iflib.txq1.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.em.[num].iflib.txq1.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.em.[num].iflib.txq1.r_drops |
# of drops in the mp_ring for this queue |
dev.em.[num].iflib.txq1.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.em.[num].iflib.txq1.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.em.[num].iflib.txq1.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.em.[num].iflib.txq1.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.em.[num].iflib.txq1.ring_state |
soft ring state |
dev.em.[num].iflib.txq1.tx_map_failed |
# of times DMA map failed |
dev.em.[num].iflib.txq1.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.em.[num].iflib.txq1.txq_cidx |
Consumer Index |
dev.em.[num].iflib.txq1.txq_cidx_processed |
Consumer Index seen by credit update |
dev.em.[num].iflib.txq1.txq_cleaned |
total cleaned |
dev.em.[num].iflib.txq1.txq_in_use |
descriptors in use |
dev.em.[num].iflib.txq1.txq_pidx |
Producer Index |
dev.em.[num].iflib.txq1.txq_processed |
descriptors processed for clean |
dev.em.[num].iflib.use_extra_msix_vectors |
attempt to reserve the given number of extra MSI-X vectors during driver load for the creation of additional interfaces later |
dev.em.[num].iflib.use_logical_cores |
try to make use of logical cores for TX and RX |
dev.em.[num].interrupts |
Interrupt Statistics |
dev.em.[num].interrupts.asserts |
Interrupt Assertion Count |
dev.em.[num].interrupts.rx_abs_timer |
Interrupt Cause Rx Abs Timer Expire Count |
dev.em.[num].interrupts.rx_desc_min_thresh |
Interrupt Cause Rx Desc Min Thresh Count |
dev.em.[num].interrupts.rx_overrun |
Interrupt Cause Receiver Overrun Count |
dev.em.[num].interrupts.rx_pkt_timer |
Interrupt Cause Rx Pkt Timer Expire Count |
dev.em.[num].interrupts.tx_abs_timer |
Interrupt Cause Tx Abs Timer Expire Count |
dev.em.[num].interrupts.tx_pkt_timer |
Interrupt Cause Tx Pkt Timer Expire Count |
dev.em.[num].interrupts.tx_queue_empty |
Interrupt Cause Tx Queue Empty Count |
dev.em.[num].interrupts.tx_queue_min_thresh |
Interrupt Cause Tx Queue Min Thresh Count |
dev.em.[num].itr |
interrupt delay limit in usecs/4 |
dev.em.[num].link_irq |
Link MSI-X IRQ Handled |
dev.em.[num].mac_stats |
Statistics |
dev.em.[num].mac_stats.alignment_errs |
Alignment Errors |
dev.em.[num].mac_stats.bcast_pkts_recvd |
Broadcast Packets Received |
dev.em.[num].mac_stats.bcast_pkts_txd |
Broadcast Packets Transmitted |
dev.em.[num].mac_stats.coll_ext_errs |
Collision/Carrier extension errors |
dev.em.[num].mac_stats.collision_count |
Collision Count |
dev.em.[num].mac_stats.crc_errs |
CRC errors |
dev.em.[num].mac_stats.defer_count |
Defer Count |
dev.em.[num].mac_stats.excess_coll |
Excessive collisions |
dev.em.[num].mac_stats.good_octets_recvd |
Good Octets Received |
dev.em.[num].mac_stats.good_octets_txd |
Good Octets Transmitted |
dev.em.[num].mac_stats.good_pkts_recvd |
Good Packets Received |
dev.em.[num].mac_stats.good_pkts_txd |
Good Packets Transmitted |
dev.em.[num].mac_stats.late_coll |
Late collisions |
dev.em.[num].mac_stats.mcast_pkts_recvd |
Multicast Packets Received |
dev.em.[num].mac_stats.mcast_pkts_txd |
Multicast Packets Transmitted |
dev.em.[num].mac_stats.mgmt_pkts_drop |
Management Packets Dropped |
dev.em.[num].mac_stats.mgmt_pkts_recvd |
Management Packets Received |
dev.em.[num].mac_stats.mgmt_pkts_txd |
Management Packets Transmitted |
dev.em.[num].mac_stats.missed_packets |
Missed Packets |
dev.em.[num].mac_stats.multiple_coll |
Multiple collisions |
dev.em.[num].mac_stats.recv_errs |
Receive Errors |
dev.em.[num].mac_stats.recv_fragmented |
Fragmented Packets Received |
dev.em.[num].mac_stats.recv_jabber |
Received Jabber |
dev.em.[num].mac_stats.recv_length_errors |
Receive Length Errors |
dev.em.[num].mac_stats.recv_no_buff |
Receive No Buffers |
dev.em.[num].mac_stats.recv_oversize |
Oversized Packets Received |
dev.em.[num].mac_stats.recv_undersize |
Receive Undersize |
dev.em.[num].mac_stats.rx_frames_1024_1522 |
1023-1522 byte frames received |
dev.em.[num].mac_stats.rx_frames_128_255 |
128-255 byte frames received |
dev.em.[num].mac_stats.rx_frames_256_511 |
256-511 byte frames received |
dev.em.[num].mac_stats.rx_frames_512_1023 |
512-1023 byte frames received |
dev.em.[num].mac_stats.rx_frames_64 |
64 byte frames received |
dev.em.[num].mac_stats.rx_frames_65_127 |
65-127 byte frames received |
dev.em.[num].mac_stats.sequence_errors |
Sequence Errors |
dev.em.[num].mac_stats.single_coll |
Single collisions |
dev.em.[num].mac_stats.symbol_errors |
Symbol Errors |
dev.em.[num].mac_stats.total_pkts_recvd |
Total Packets Received |
dev.em.[num].mac_stats.total_pkts_txd |
Total Packets Transmitted |
dev.em.[num].mac_stats.tso_ctx_fail |
TSO Contexts Failed |
dev.em.[num].mac_stats.tso_txd |
TSO Contexts Transmitted |
dev.em.[num].mac_stats.tx_frames_1024_1522 |
1024-1522 byte frames transmitted |
dev.em.[num].mac_stats.tx_frames_128_255 |
128-255 byte frames transmitted |
dev.em.[num].mac_stats.tx_frames_256_511 |
256-511 byte frames transmitted |
dev.em.[num].mac_stats.tx_frames_512_1023 |
512-1023 byte frames transmitted |
dev.em.[num].mac_stats.tx_frames_64 |
64 byte frames transmitted |
dev.em.[num].mac_stats.tx_frames_65_127 |
65-127 byte frames transmitted |
dev.em.[num].mac_stats.unsupported_fc_recvd |
Unsupported Flow Control Received |
dev.em.[num].mac_stats.xoff_recvd |
XOFF Received |
dev.em.[num].mac_stats.xoff_txd |
XOFF Transmitted |
dev.em.[num].mac_stats.xon_recvd |
XON Received |
dev.em.[num].mac_stats.xon_txd |
XON Transmitted |
dev.em.[num].nvm |
NVM Information |
dev.em.[num].queue_rx_0 |
RX Queue Name |
dev.em.[num].queue_rx_0.interrupt_rate |
Interrupt Rate |
dev.em.[num].queue_rx_0.rx_irq |
Queue MSI-X Receive Interrupts |
dev.em.[num].queue_rx_0.rxd_head |
Receive Descriptor Head |
dev.em.[num].queue_rx_0.rxd_tail |
Receive Descriptor Tail |
dev.em.[num].queue_rx_1 |
RX Queue Name |
dev.em.[num].queue_rx_1.interrupt_rate |
Interrupt Rate |
dev.em.[num].queue_rx_1.rx_irq |
Queue MSI-X Receive Interrupts |
dev.em.[num].queue_rx_1.rxd_head |
Receive Descriptor Head |
dev.em.[num].queue_rx_1.rxd_tail |
Receive Descriptor Tail |
dev.em.[num].queue_tx_0 |
TX Queue Name |
dev.em.[num].queue_tx_0.interrupt_rate |
Interrupt Rate |
dev.em.[num].queue_tx_0.tx_irq |
Queue MSI-X Transmit Interrupts |
dev.em.[num].queue_tx_0.txd_head |
Transmit Descriptor Head |
dev.em.[num].queue_tx_0.txd_tail |
Transmit Descriptor Tail |
dev.em.[num].queue_tx_1 |
TX Queue Name |
dev.em.[num].queue_tx_1.interrupt_rate |
Interrupt Rate |
dev.em.[num].queue_tx_1.tx_irq |
Queue MSI-X Transmit Interrupts |
dev.em.[num].queue_tx_1.txd_head |
Transmit Descriptor Head |
dev.em.[num].queue_tx_1.txd_tail |
Transmit Descriptor Tail |
dev.em.[num].reg_dump |
Dump Registers |
dev.em.[num].rs_dump |
Dump RS indexes |
dev.em.[num].rx_abs_int_delay |
receive interrupt delay limit in usecs |
dev.em.[num].rx_control |
Receiver Control Register |
dev.em.[num].rx_int_delay |
receive interrupt delay in usecs |
dev.em.[num].rx_overruns |
RX overruns |
dev.em.[num].tx_abs_int_delay |
transmit interrupt delay limit in usecs |
dev.em.[num].tx_int_delay |
transmit interrupt delay in usecs |
dev.em.[num].wake |
Device set to wake the system |
dev.em.[num].watchdog_timeouts |
Watchdog timeouts |
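
The interrupt moderation entries above (dev.em.[num].itr, expressed in usecs/4, and the rx/tx interrupt delay nodes, expressed in usecs) are ordinary numeric sysctls that a program can query or tune with sysctlbyname(3). A minimal sketch follows; it assumes device unit 0, that the nodes are writable on the running kernel, and uses 32 usecs purely as an illustrative value.

/*
 * Sketch: read em(4) interrupt-delay sysctls and set a new RX delay.
 * Unit 0 and the value 32 are assumptions for illustration only;
 * writing requires root and a writable node.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
	int itr, rx_delay, new_delay = 32;
	size_t len;

	len = sizeof(itr);
	if (sysctlbyname("dev.em.0.itr", &itr, &len, NULL, 0) == -1)
		err(1, "dev.em.0.itr");
	printf("itr: %d (usecs/4)\n", itr);

	len = sizeof(rx_delay);
	if (sysctlbyname("dev.em.0.rx_int_delay", &rx_delay, &len, NULL, 0) == -1)
		err(1, "dev.em.0.rx_int_delay");
	printf("rx_int_delay: %d usecs\n", rx_delay);

	/* Apply the illustrative new delay; warn instead of failing hard. */
	if (sysctlbyname("dev.em.0.rx_int_delay", NULL, NULL,
	    &new_delay, sizeof(new_delay)) == -1)
		warn("set dev.em.0.rx_int_delay");
	return (0);
}
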
dev.ena |
|
dev.ena.%parent |
parent class |
dev.ena.[num] |
|
dev.ena.[num].%desc |
device description |
dev.ena.[num].%driver |
device driver name |
dev.ena.[num].%iommu |
iommu unit handling the device requests |
dev.ena.[num].%location |
device location relative to parent |
dev.ena.[num].%parent |
parent device |
dev.ena.[num].%pnpinfo |
device identification |
dev.ena.[num].admin_q_pause |
Admin queue pauses |
dev.ena.[num].admin_stats |
ENA Admin Queue statistics |
dev.ena.[num].admin_stats.aborted_cmd |
Aborted commands |
dev.ena.[num].admin_stats.completed_cmd |
Completed commands |
dev.ena.[num].admin_stats.no_completion |
Commands not completed |
dev.ena.[num].admin_stats.out_of_space |
Queue out of space |
dev.ena.[num].admin_stats.sumbitted_cmd |
Submitted commands |
dev.ena.[num].admin_to |
Admin queue timeouts count |
dev.ena.[num].bad_rx_desc_num |
Bad RX descriptors number count |
dev.ena.[num].bad_rx_req_id |
Bad RX req id count |
dev.ena.[num].bad_tx_req_id |
Bad TX req id count |
dev.ena.[num].buf_ring_size |
Size of the Tx buffer ring (drbr). |
dev.ena.[num].customer_metrics |
ENA's customer metrics |
dev.ena.[num].customer_metrics.bw_in_allowance_exceeded |
Inbound BW allowance exceeded |
dev.ena.[num].customer_metrics.bw_out_allowance_exceeded |
Outbound BW allowance exceeded |
dev.ena.[num].customer_metrics.conntrack_allowance_available |
Number of available conntracks |
dev.ena.[num].customer_metrics.conntrack_allowance_exceeded |
Connection tracking allowance exceeded |
dev.ena.[num].customer_metrics.linklocal_allowance_exceeded |
Linklocal packet rate allowance exceeded |
dev.ena.[num].customer_metrics.pps_allowance_exceeded |
PPS allowance exceeded |
dev.ena.[num].device_request_reset |
Device reset requests count |
dev.ena.[num].hw_stats |
Statistics from hardware |
dev.ena.[num].hw_stats.rx_bytes |
Bytes received |
dev.ena.[num].hw_stats.rx_drops |
Receive packet drops |
dev.ena.[num].hw_stats.rx_packets |
Packets received |
dev.ena.[num].hw_stats.tx_bytes |
Bytes transmitted |
dev.ena.[num].hw_stats.tx_drops |
Transmit packet drops |
dev.ena.[num].hw_stats.tx_packets |
Packets transmitted |
dev.ena.[num].interface_down |
Network interface down count |
dev.ena.[num].interface_up |
Network interface up count |
dev.ena.[num].invalid_state |
Driver invalid state count |
dev.ena.[num].io_queues_nb |
Number of IO queues. |
dev.ena.[num].irq_affinity |
Decide base CPU and stride for IRQ affinity. |
dev.ena.[num].irq_affinity.base_cpu |
Base cpu index for setting irq affinity. |
dev.ena.[num].irq_affinity.cpu_stride |
Distance between irqs when setting affinity. |
dev.ena.[num].keep_alive_timeout |
Timeout for Keep Alive messages |
dev.ena.[num].missing_admin_interrupt |
Missing admin interrupts count |
dev.ena.[num].missing_intr |
Missing interrupt count |
dev.ena.[num].missing_tx_cmpl |
Missing TX completions resets count |
dev.ena.[num].missing_tx_max_queues |
Number of TX queues to check per run |
dev.ena.[num].missing_tx_threshold |
Max number of timed-out packets |
dev.ena.[num].missing_tx_timeout |
Timeout for TX completion |
dev.ena.[num].os_trigger |
OS trigger count |
dev.ena.[num].queue0 |
Queue Name |
dev.ena.[num].queue0.rx_ring |
RX ring |
dev.ena.[num].queue0.rx_ring.bad_desc_num |
Bad descriptor count |
dev.ena.[num].queue0.rx_ring.bad_req_id |
Bad request id count |
dev.ena.[num].queue0.rx_ring.bytes |
Bytes received |
dev.ena.[num].queue0.rx_ring.count |
Packets received |
dev.ena.[num].queue0.rx_ring.csum_bad |
Bad RX checksum |
dev.ena.[num].queue0.rx_ring.csum_good |
Valid RX checksum calculations |
dev.ena.[num].queue0.rx_ring.dma_mapping_err |
DMA mapping errors |
dev.ena.[num].queue0.rx_ring.empty_rx_ring |
RX descriptors depletion count |
dev.ena.[num].queue0.rx_ring.mbuf_alloc_fail |
Failed mbuf allocs |
dev.ena.[num].queue0.rx_ring.mjum_alloc_fail |
Failed jumbo mbuf allocs |
dev.ena.[num].queue0.rx_ring.refil_partial |
Partially refilled mbufs |
dev.ena.[num].queue0.tx_ring |
TX ring |
dev.ena.[num].queue0.tx_ring.bad_req_id |
Bad request id count |
dev.ena.[num].queue0.tx_ring.bytes |
Bytes sent |
dev.ena.[num].queue0.tx_ring.count |
Packets sent |
dev.ena.[num].queue0.tx_ring.dma_mapping_err |
DMA mapping failures |
dev.ena.[num].queue0.tx_ring.doorbells |
Queue doorbells |
dev.ena.[num].queue0.tx_ring.llq_buffer_copy |
Header copies for llq transaction |
dev.ena.[num].queue0.tx_ring.mbuf_collapse_err |
Mbuf collapse failures |
dev.ena.[num].queue0.tx_ring.mbuf_collapses |
Mbuf collapse count |
dev.ena.[num].queue0.tx_ring.missing_tx_comp |
TX completions missed |
dev.ena.[num].queue0.tx_ring.prepare_ctx_err |
TX buffer preparation failures |
dev.ena.[num].queue0.tx_ring.queue_stops |
Queue stops |
dev.ena.[num].queue0.tx_ring.queue_wakeups |
Queue wakeups |
dev.ena.[num].queue0.tx_ring.unmask_interrupt_num |
Unmasked interrupt count |
dev.ena.[num].queue1 |
Queue Name |
dev.ena.[num].queue1.rx_ring |
RX ring |
dev.ena.[num].queue1.rx_ring.bad_desc_num |
Bad descriptor count |
dev.ena.[num].queue1.rx_ring.bad_req_id |
Bad request id count |
dev.ena.[num].queue1.rx_ring.bytes |
Bytes received |
dev.ena.[num].queue1.rx_ring.count |
Packets received |
dev.ena.[num].queue1.rx_ring.csum_bad |
Bad RX checksum |
dev.ena.[num].queue1.rx_ring.csum_good |
Valid RX checksum calculations |
dev.ena.[num].queue1.rx_ring.dma_mapping_err |
DMA mapping errors |
dev.ena.[num].queue1.rx_ring.empty_rx_ring |
RX descriptors depletion count |
dev.ena.[num].queue1.rx_ring.mbuf_alloc_fail |
Failed mbuf allocs |
dev.ena.[num].queue1.rx_ring.mjum_alloc_fail |
Failed jumbo mbuf allocs |
dev.ena.[num].queue1.rx_ring.refil_partial |
Partially refilled mbufs |
dev.ena.[num].queue1.tx_ring |
TX ring |
dev.ena.[num].queue1.tx_ring.bad_req_id |
Bad request id count |
dev.ena.[num].queue1.tx_ring.bytes |
Bytes sent |
dev.ena.[num].queue1.tx_ring.count |
Packets sent |
dev.ena.[num].queue1.tx_ring.dma_mapping_err |
DMA mapping failures |
dev.ena.[num].queue1.tx_ring.doorbells |
Queue doorbells |
dev.ena.[num].queue1.tx_ring.llq_buffer_copy |
Header copies for llq transaction |
dev.ena.[num].queue1.tx_ring.mbuf_collapse_err |
Mbuf collapse failures |
dev.ena.[num].queue1.tx_ring.mbuf_collapses |
Mbuf collapse count |
dev.ena.[num].queue1.tx_ring.missing_tx_comp |
TX completions missed |
dev.ena.[num].queue1.tx_ring.prepare_ctx_err |
TX buffer preparation failures |
dev.ena.[num].queue1.tx_ring.queue_stops |
Queue stops |
dev.ena.[num].queue1.tx_ring.queue_wakeups |
Queue wakeups |
dev.ena.[num].queue1.tx_ring.unmask_interrupt_num |
Unmasked interrupt count |
dev.ena.[num].rss |
Receive Side Scaling options. |
dev.ena.[num].rss.indir_table |
RSS indirection table. |
dev.ena.[num].rss.indir_table_size |
RSS indirection table size. |
dev.ena.[num].rss.key |
RSS key. |
dev.ena.[num].rx_desc_malformed |
RX descriptors malformed count |
dev.ena.[num].rx_queue_size |
Size of the Rx ring. The size should be a power of 2. |
dev.ena.[num].stats_sample_interval |
Interval in seconds for updating network interface metrics. 0 turns off the update. |
dev.ena.[num].total_resets |
Total resets count |
dev.ena.[num].tx_desc_malformed |
TX descriptors malformed count |
dev.ena.[num].wd_active |
Watchdog is active |
dev.ena.[num].wd_expired |
Watchdog expiry count |
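
The dev.ena.[num].hw_stats counters above are plain numeric nodes, so a monitoring tool can sample them directly with sysctlbyname(3). The sketch below assumes unit 0 and that the counters are exported as 64-bit integers on the running kernel (verify the node type locally before relying on the width).

/*
 * Sketch: sample ena(4) hardware counters.  Device unit 0 and the
 * 64-bit counter width are assumptions; adjust for the local system.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t
read_u64(const char *node)
{
	uint64_t v;
	size_t len = sizeof(v);

	if (sysctlbyname(node, &v, &len, NULL, 0) == -1)
		err(1, "%s", node);
	return (v);
}

int
main(void)
{
	printf("rx_packets: %ju\n",
	    (uintmax_t)read_u64("dev.ena.0.hw_stats.rx_packets"));
	printf("rx_drops:   %ju\n",
	    (uintmax_t)read_u64("dev.ena.0.hw_stats.rx_drops"));
	printf("tx_packets: %ju\n",
	    (uintmax_t)read_u64("dev.ena.0.hw_stats.tx_packets"));
	return (0);
}
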
dev.est |
|
dev.est.%parent |
parent class |
dev.est.[num] |
|
dev.est.[num].%desc |
device description |
dev.est.[num].%driver |
device driver name |
dev.est.[num].%iommu |
iommu unit handling the device requests |
dev.est.[num].%location |
device location relative to parent |
dev.est.[num].%parent |
parent device |
dev.est.[num].%pnpinfo |
device identification |
dev.est.[num].freq_settings |
CPU frequency driver settings |
dev.fpupnp |
|
dev.fpupnp.%parent |
parent class |
dev.fpupnp.[num] |
|
dev.fpupnp.[num].%desc |
device description |
dev.fpupnp.[num].%driver |
device driver name |
dev.fpupnp.[num].%iommu |
iommu unit handling the device requests |
dev.fpupnp.[num].%location |
device location relative to parent |
dev.fpupnp.[num].%parent |
parent device |
dev.fpupnp.[num].%pnpinfo |
device identification |
dev.hconf |
|
dev.hconf.%parent |
parent class |
dev.hconf.[num] |
|
dev.hconf.[num].%desc |
device description |
dev.hconf.[num].%driver |
device driver name |
dev.hconf.[num].%iommu |
iommu unit handling the device requests |
dev.hconf.[num].%location |
device location relative to parent |
dev.hconf.[num].%parent |
parent device |
dev.hconf.[num].%pnpinfo |
device identification |
dev.hconf.[num].buttons_switch |
Enable / disable switch for buttons: 1 = on, 0 = off |
dev.hconf.[num].input_mode |
HID device input mode: 0 = mouse, 3 = touchpad |
dev.hconf.[num].surface_switch |
Enable / disable switch for surface: 1 = on, 0 = off |
dev.hdaa |
|
dev.hdaa.%parent |
parent class |
dev.hdaa.[num] |
|
dev.hdaa.[num].%desc |
device description |
dev.hdaa.[num].%driver |
device driver name |
dev.hdaa.[num].%iommu |
iommu unit handling the device requests |
dev.hdaa.[num].%location |
device location relative to parent |
dev.hdaa.[num].%parent |
parent device |
dev.hdaa.[num].%pnpinfo |
device identification |
dev.hdaa.[num].config |
Configuration options |
dev.hdaa.[num].gpi_state |
GPI state |
dev.hdaa.[num].gpio_config |
GPIO configuration |
dev.hdaa.[num].gpio_state |
GPIO state |
dev.hdaa.[num].gpo_config |
GPO configuration |
dev.hdaa.[num].gpo_state |
GPO state |
dev.hdaa.[num].init_clear |
Clear initial pin widget configuration |
dev.hdaa.[num].nid10 |
Node capabilities |
dev.hdaa.[num].nid11 |
Node capabilities |
dev.hdaa.[num].nid11_config |
Current pin configuration |
dev.hdaa.[num].nid11_original |
Original pin configuration |
dev.hdaa.[num].nid12 |
Node capabilities |
dev.hdaa.[num].nid13 |
Node capabilities |
dev.hdaa.[num].nid13_config |
Current pin configuration |
dev.hdaa.[num].nid13_original |
Original pin configuration |
dev.hdaa.[num].nid14 |
Node capabilities |
dev.hdaa.[num].nid15 |
Node capabilities |
dev.hdaa.[num].nid16 |
Node capabilities |
dev.hdaa.[num].nid17 |
Node capabilities |
dev.hdaa.[num].nid17_config |
Current pin configuration |
dev.hdaa.[num].nid17_original |
Original pin configuration |
dev.hdaa.[num].nid18 |
Node capabilities |
dev.hdaa.[num].nid18_config |
Current pin configuration |
dev.hdaa.[num].nid18_original |
Original pin configuration |
dev.hdaa.[num].nid19 |
Node capabilities |
dev.hdaa.[num].nid19_config |
Current pin configuration |
dev.hdaa.[num].nid19_original |
Original pin configuration |
dev.hdaa.[num].nid2 |
Node capabilities |
dev.hdaa.[num].nid20 |
Node capabilities |
dev.hdaa.[num].nid20_config |
Current pin configuration |
dev.hdaa.[num].nid20_original |
Original pin configuration |
dev.hdaa.[num].nid21 |
Node capabilities |
dev.hdaa.[num].nid21_config |
Current pin configuration |
dev.hdaa.[num].nid21_original |
Original pin configuration |
dev.hdaa.[num].nid22 |
Node capabilities |
dev.hdaa.[num].nid22_config |
Current pin configuration |
dev.hdaa.[num].nid22_original |
Original pin configuration |
dev.hdaa.[num].nid23 |
Node capabilities |
dev.hdaa.[num].nid23_config |
Current pin configuration |
dev.hdaa.[num].nid23_original |
Original pin configuration |
dev.hdaa.[num].nid24 |
Node capabilities |
dev.hdaa.[num].nid24_config |
Current pin configuration |
dev.hdaa.[num].nid24_original |
Original pin configuration |
dev.hdaa.[num].nid25 |
Node capabilities |
dev.hdaa.[num].nid25_config |
Current pin configuration |
dev.hdaa.[num].nid25_original |
Original pin configuration |
dev.hdaa.[num].nid26 |
Node capabilities |
dev.hdaa.[num].nid26_config |
Current pin configuration |
dev.hdaa.[num].nid26_original |
Original pin configuration |
dev.hdaa.[num].nid27 |
Node capabilities |
dev.hdaa.[num].nid27_config |
Current pin configuration |
dev.hdaa.[num].nid27_original |
Original pin configuration |
dev.hdaa.[num].nid28 |
Node capabilities |
dev.hdaa.[num].nid28_config |
Current pin configuration |
dev.hdaa.[num].nid28_original |
Original pin configuration |
dev.hdaa.[num].nid29 |
Node capabilities |
dev.hdaa.[num].nid29_config |
Current pin configuration |
dev.hdaa.[num].nid29_original |
Original pin configuration |
dev.hdaa.[num].nid3 |
Node capabilities |
dev.hdaa.[num].nid30 |
Node capabilities |
dev.hdaa.[num].nid30_config |
Current pin configuration |
dev.hdaa.[num].nid30_original |
Original pin configuration |
dev.hdaa.[num].nid31 |
Node capabilities |
dev.hdaa.[num].nid31_config |
Current pin configuration |
dev.hdaa.[num].nid31_original |
Original pin configuration |
dev.hdaa.[num].nid32 |
Node capabilities |
dev.hdaa.[num].nid33 |
Node capabilities |
dev.hdaa.[num].nid33_config |
Current pin configuration |
dev.hdaa.[num].nid33_original |
Original pin configuration |
dev.hdaa.[num].nid34 |
Node capabilities |
dev.hdaa.[num].nid35 |
Node capabilities |
dev.hdaa.[num].nid36 |
Node capabilities |
dev.hdaa.[num].nid37 |
Node capabilities |
dev.hdaa.[num].nid38 |
Node capabilities |
dev.hdaa.[num].nid3_config |
Current pin configuration |
dev.hdaa.[num].nid3_original |
Original pin configuration |
dev.hdaa.[num].nid4 |
Node capabilities |
dev.hdaa.[num].nid4_config |
Current pin configuration |
dev.hdaa.[num].nid4_original |
Original pin configuration |
dev.hdaa.[num].nid5 |
Node capabilities |
dev.hdaa.[num].nid5_config |
Current pin configuration |
dev.hdaa.[num].nid5_original |
Original pin configuration |
dev.hdaa.[num].nid6 |
Node capabilities |
dev.hdaa.[num].nid6_config |
Current pin configuration |
dev.hdaa.[num].nid6_original |
Original pin configuration |
dev.hdaa.[num].nid7 |
Node capabilities |
dev.hdaa.[num].nid7_config |
Current pin configuration |
dev.hdaa.[num].nid7_original |
Original pin configuration |
dev.hdaa.[num].nid8 |
Node capabilities |
dev.hdaa.[num].nid9 |
Node capabilities |
dev.hdaa.[num].nid9_config |
Current pin configuration |
dev.hdaa.[num].nid9_original |
Original pin configuration |
dev.hdaa.[num].reconfig |
Reprocess configuration |
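
The nidNN_config entries above accept a replacement pin configuration at runtime, and writing 1 to dev.hdaa.[num].reconfig asks the driver to reprocess the new configuration (the key=value syntax is described in snd_hda(4)). A minimal sketch, assuming unit 0, pin widget 20, and an illustrative configuration string:

/*
 * Sketch: rewrite an HDA pin configuration and trigger reprocessing.
 * Unit 0, nid 20 and the configuration string are illustrative;
 * see snd_hda(4) for the accepted keywords.  Requires root.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <string.h>

int
main(void)
{
	const char *cfg = "device=Headphones conn=Jack loc=Front";
	int one = 1;

	if (sysctlbyname("dev.hdaa.0.nid20_config", NULL, NULL,
	    cfg, strlen(cfg)) == -1)
		err(1, "dev.hdaa.0.nid20_config");
	if (sysctlbyname("dev.hdaa.0.reconfig", NULL, NULL,
	    &one, sizeof(one)) == -1)
		err(1, "dev.hdaa.0.reconfig");
	return (0);
}
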
dev.hdac |
|
dev.hdac.%parent |
parent class |
dev.hdac.[num] |
|
dev.hdac.[num].%desc |
device description |
dev.hdac.[num].%driver |
device driver name |
dev.hdac.[num].%iommu |
iommu unit handling the device requests |
dev.hdac.[num].%location |
device location relative to parent |
dev.hdac.[num].%parent |
parent device |
dev.hdac.[num].%pnpinfo |
device identification |
dev.hdac.[num].pindump |
Dump pin states/data |
dev.hdac.[num].polling |
Enable polling mode |
dev.hdac.[num].wake |
Device set to wake the system |
dev.hdacc |
|
dev.hdacc.%parent |
parent class |
dev.hdacc.[num] |
|
dev.hdacc.[num].%desc |
device description |
dev.hdacc.[num].%driver |
device driver name |
dev.hdacc.[num].%iommu |
iommu unit handling the device requests |
dev.hdacc.[num].%location |
device location relative to parent |
dev.hdacc.[num].%parent |
parent device |
dev.hdacc.[num].%pnpinfo |
device identification |
dev.hidbus |
|
dev.hidbus.%parent |
parent class |
dev.hidbus.[num] |
|
dev.hidbus.[num].%desc |
device description |
dev.hidbus.[num].%driver |
device driver name |
dev.hidbus.[num].%iommu |
iommu unit handling the device requests |
dev.hidbus.[num].%location |
device location relative to parent |
dev.hidbus.[num].%parent |
parent device |
dev.hidbus.[num].%pnpinfo |
device identification |
dev.hkbd |
|
dev.hkbd.%parent |
parent class |
dev.hkbd.[num] |
|
dev.hkbd.[num].%desc |
device description |
dev.hkbd.[num].%driver |
device driver name |
dev.hkbd.[num].%iommu |
iommu unit handling the device requests |
dev.hkbd.[num].%location |
device location relative to parent |
dev.hkbd.[num].%parent |
parent device |
dev.hkbd.[num].%pnpinfo |
device identification |
dev.hms |
|
dev.hms.%parent |
parent class |
dev.hms.[num] |
|
dev.hms.[num].%desc |
device description |
dev.hms.[num].%driver |
device driver name |
dev.hms.[num].%iommu |
iommu unit handling the device requests |
dev.hms.[num].%location |
device location relative to parent |
dev.hms.[num].%parent |
parent device |
dev.hms.[num].%pnpinfo |
device identification |
dev.hms.[num].debug |
Verbosity level |
dev.hms.[num].drift_thresh |
drift detection threshold |
dev.hmt |
|
dev.hmt.%parent |
parent class |
dev.hmt.[num] |
|
dev.hmt.[num].%desc |
device description |
dev.hmt.[num].%driver |
device driver name |
dev.hmt.[num].%iommu |
iommu unit handling the device requests |
dev.hmt.[num].%location |
device location relative to parent |
dev.hmt.[num].%parent |
parent device |
dev.hmt.[num].%pnpinfo |
device identification |
dev.hn |
|
dev.hn.%parent |
parent class |
dev.hn.[num] |
|
dev.hn.[num].%desc |
device description |
dev.hn.[num].%driver |
device driver name |
dev.hn.[num].%iommu |
iommu unit handling the device requests |
dev.hn.[num].%location |
device location relative to parent |
dev.hn.[num].%parent |
parent device |
dev.hn.[num].%pnpinfo |
device identification |
dev.hn.[num].agg_align |
Applied packet transmission aggregation alignment |
dev.hn.[num].agg_flush_failed |
# of packet transmission aggregation flush failures |
dev.hn.[num].agg_pktmax |
Applied packet transmission aggregation packets |
dev.hn.[num].agg_pkts |
Packet transmission aggregation packets, 0 -- disable, -1 -- auto |
dev.hn.[num].agg_size |
Packet transmission aggregation size, 0 -- disable, -1 -- auto |
dev.hn.[num].agg_szmax |
Applied packet transmission aggregation size |
dev.hn.[num].caps |
capabilities |
dev.hn.[num].channel |
|
dev.hn.[num].channel.[num] |
|
dev.hn.[num].channel.[num].br |
|
dev.hn.[num].channel.[num].br.rx |
|
dev.hn.[num].channel.[num].br.rx.state |
rx state |
dev.hn.[num].channel.[num].br.rx.state_bin |
rx binary state |
dev.hn.[num].channel.[num].br.tx |
|
dev.hn.[num].channel.[num].br.tx.state |
tx state |
dev.hn.[num].channel.[num].br.tx.state_bin |
tx binary state |
dev.hn.[num].channel.[num].cpu |
owner CPU id |
dev.hn.[num].channel.[num].mnf |
has monitor notification facilities |
dev.hn.[num].channel.[num].sub |
|
dev.hn.[num].channel.[num].sub.[num] |
|
dev.hn.[num].channel.[num].sub.[num].br |
|
dev.hn.[num].channel.[num].sub.[num].br.rx |
|
dev.hn.[num].channel.[num].sub.[num].br.rx.state |
rx state |
dev.hn.[num].channel.[num].sub.[num].br.rx.state_bin |
rx binary state |
dev.hn.[num].channel.[num].sub.[num].br.tx |
|
dev.hn.[num].channel.[num].sub.[num].br.tx.state |
tx state |
dev.hn.[num].channel.[num].sub.[num].br.tx.state_bin |
tx binary state |
dev.hn.[num].channel.[num].sub.[num].chanid |
channel id |
dev.hn.[num].channel.[num].sub.[num].cpu |
owner CPU id |
dev.hn.[num].channel.[num].sub.[num].mnf |
has monitor notification facilities |
dev.hn.[num].csum_ip |
RXCSUM IP |
dev.hn.[num].csum_tcp |
RXCSUM TCP |
dev.hn.[num].csum_trusted |
# of packets for which the host's csum verification is trusted |
dev.hn.[num].csum_udp |
RXCSUM UDP |
dev.hn.[num].direct_tx_size |
Size of the packet for direct transmission |
dev.hn.[num].hwassist |
hwassist |
dev.hn.[num].lro_ackcnt_lim |
Max # of ACKs to be aggregated by LRO |
dev.hn.[num].lro_flushed |
LRO flushed |
dev.hn.[num].lro_length_lim |
Max # of data bytes to be aggregated by LRO |
dev.hn.[num].lro_queued |
LRO queued |
dev.hn.[num].lro_tried |
# of LRO tries |
dev.hn.[num].mbuf_hash |
RSS hash for mbufs |
dev.hn.[num].ndis_version |
NDIS version |
dev.hn.[num].no_txdescs |
# of times short of TX descs |
dev.hn.[num].nvs_version |
NVS version |
dev.hn.[num].polling |
Polling frequency: [100,1000000], 0 disables polling |
dev.hn.[num].rndis_agg_align |
RNDIS packet transmission aggregation alignment |
dev.hn.[num].rndis_agg_pkts |
RNDIS offered packet transmission aggregation count limit |
dev.hn.[num].rndis_agg_size |
RNDIS offered packet transmission aggregation size limit |
dev.hn.[num].rsc_switch |
switch to rsc |
dev.hn.[num].rss_hash |
RSS hash |
dev.hn.[num].rss_hashcap |
RSS hash capabilities |
dev.hn.[num].rss_ind |
RSS indirect table |
dev.hn.[num].rss_ind_size |
RSS indirect entry count |
dev.hn.[num].rss_key |
RSS key |
dev.hn.[num].rx |
|
dev.hn.[num].rx.[num] |
|
dev.hn.[num].rx.[num].packets |
# of packets received |
dev.hn.[num].rx.[num].pktbuf_len |
Temporary channel packet buffer length |
dev.hn.[num].rx.[num].rsc_drop |
# of RSC fragments dropped |
dev.hn.[num].rx.[num].rsc_pkts |
# of RSC packets received |
dev.hn.[num].rx.[num].rss_pkts |
# of packets w/ RSS info received |
dev.hn.[num].rx_ack_failed |
# of RXBUF ack failures |
dev.hn.[num].rx_ring_cnt |
# created RX rings |
dev.hn.[num].rx_ring_inuse |
# used RX rings |
dev.hn.[num].rxfilter |
rxfilter |
dev.hn.[num].sched_tx |
Always schedule transmission instead of doing direct transmission |
dev.hn.[num].send_failed |
# of Hyper-V send failures |
dev.hn.[num].small_pkts |
# of small packets received |
dev.hn.[num].trust_hostip |
Trust ip packet verification on host side, when csum info is missing |
dev.hn.[num].trust_hosttcp |
Trust tcp segment verification on host side, when csum info is missing |
dev.hn.[num].trust_hostudp |
Trust udp datagram verification on host side, when csum info is missing |
dev.hn.[num].tso_max |
max TSO size |
dev.hn.[num].tso_maxsegcnt |
max # of TSO segments |
dev.hn.[num].tso_maxsegsz |
max size of TSO segment |
dev.hn.[num].tx |
|
dev.hn.[num].tx.[num] |
|
dev.hn.[num].tx.[num].oactive |
over active |
dev.hn.[num].tx.[num].packets |
# of packets transmitted |
dev.hn.[num].tx.[num].sends |
# of sends |
dev.hn.[num].tx_chimney |
# of chimney send |
dev.hn.[num].tx_chimney_max |
Chimney send packet size upper boundary |
dev.hn.[num].tx_chimney_size |
Chimney send packet size limit |
dev.hn.[num].tx_chimney_tried |
# of chimney send tries |
dev.hn.[num].tx_collapsed |
# of TX mbuf collapsed |
dev.hn.[num].tx_ring_cnt |
# created TX rings |
dev.hn.[num].tx_ring_inuse |
# used TX rings |
dev.hn.[num].txdesc_cnt |
# of total TX descs |
dev.hn.[num].txdma_failed |
# of TX DMA failure |
dev.hn.[num].vf |
Virtual Function's name |
dev.hn.[num].vf_xpnt_accbpf |
Accurate BPF for transparent VF |
dev.hn.[num].vf_xpnt_enabled |
Transparent VF enabled |
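
dev.hn.[num].polling above takes an integer polling frequency in the range [100,1000000], with 0 disabling polling, while string nodes such as dev.hn.[num].%desc are read with the usual two-call size-query idiom. A minimal sketch, assuming unit 0 and an arbitrary 10000 Hz polling rate:

/*
 * Sketch: read the hn(4) device description (a string node) and
 * enable polling.  Unit 0 and the 10000 Hz rate are assumptions.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	char *desc;
	size_t len;
	int freq = 10000;

	/* First call reports the buffer size needed for the string. */
	if (sysctlbyname("dev.hn.0.%desc", NULL, &len, NULL, 0) == -1)
		err(1, "dev.hn.0.%%desc size");
	if ((desc = malloc(len)) == NULL)
		err(1, "malloc");
	if (sysctlbyname("dev.hn.0.%desc", desc, &len, NULL, 0) == -1)
		err(1, "dev.hn.0.%%desc");
	printf("hn0: %s\n", desc);
	free(desc);

	/* Enable polling at the illustrative rate (requires root). */
	if (sysctlbyname("dev.hn.0.polling", NULL, NULL,
	    &freq, sizeof(freq)) == -1)
		warn("set dev.hn.0.polling");
	return (0);
}
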
dev.hostb |
|
dev.hostb.%parent |
parent class |
dev.hostb.[num] |
|
dev.hostb.[num].%desc |
device description |
dev.hostb.[num].%domain |
NUMA domain |
dev.hostb.[num].%driver |
device driver name |
dev.hostb.[num].%iommu |
iommu unit handling the device requests |
dev.hostb.[num].%location |
device location relative to parent |
dev.hostb.[num].%parent |
parent device |
dev.hostb.[num].%pnpinfo |
device identification |
dev.hpet |
|
dev.hpet.%parent |
parent class |
dev.hpet.[num] |
|
dev.hpet.[num].%desc |
device description |
dev.hpet.[num].%driver |
device driver name |
dev.hpet.[num].%iommu |
iommu unit handling the device requests |
dev.hpet.[num].%location |
device location relative to parent |
dev.hpet.[num].%parent |
parent device |
dev.hpet.[num].%pnpinfo |
device identification |
dev.hpet.[num].mmap_allow |
Allow userland to memory map HPET |
dev.hpet.[num].mmap_allow_write |
Allow userland write to the HPET register space |
dev.hvet |
|
dev.hvet.%parent |
parent class |
dev.hvet.[num] |
|
dev.hvet.[num].%desc |
device description |
dev.hvet.[num].%driver |
device driver name |
dev.hvet.[num].%iommu |
iommu unit handling the device requests |
dev.hvet.[num].%location |
device location relative to parent |
dev.hvet.[num].%parent |
parent device |
dev.hvet.[num].%pnpinfo |
device identification |
dev.hvheartbeat |
|
dev.hvheartbeat.%parent |
parent class |
dev.hvheartbeat.[num] |
|
dev.hvheartbeat.[num].%desc |
device description |
dev.hvheartbeat.[num].%driver |
device driver name |
dev.hvheartbeat.[num].%iommu |
iommu unit handling the device requests |
dev.hvheartbeat.[num].%location |
device location relative to parent |
dev.hvheartbeat.[num].%parent |
parent device |
dev.hvheartbeat.[num].%pnpinfo |
device identification |
dev.hvheartbeat.[num].channel |
|
dev.hvheartbeat.[num].channel.[num] |
|
dev.hvheartbeat.[num].channel.[num].br |
|
dev.hvheartbeat.[num].channel.[num].br.rx |
|
dev.hvheartbeat.[num].channel.[num].br.rx.state |
rx state |
dev.hvheartbeat.[num].channel.[num].br.rx.state_bin |
rx binary state |
dev.hvheartbeat.[num].channel.[num].br.tx |
|
dev.hvheartbeat.[num].channel.[num].br.tx.state |
tx state |
dev.hvheartbeat.[num].channel.[num].br.tx.state_bin |
tx binary state |
dev.hvheartbeat.[num].channel.[num].cpu |
owner CPU id |
dev.hvheartbeat.[num].channel.[num].mnf |
has monitor notification facilities |
dev.hvheartbeat.[num].fw_version |
framework version |
dev.hvheartbeat.[num].msg_version |
message version |
dev.hvhid |
|
dev.hvhid.%parent |
parent class |
dev.hvhid.[num] |
|
dev.hvhid.[num].%desc |
device description |
dev.hvhid.[num].%driver |
device driver name |
dev.hvhid.[num].%iommu |
iommu unit handling the device requests |
dev.hvhid.[num].%location |
device location relative to parent |
dev.hvhid.[num].%parent |
parent device |
dev.hvhid.[num].%pnpinfo |
device identification |
dev.hvhid.[num].channel |
|
dev.hvhid.[num].channel.[num] |
|
dev.hvhid.[num].channel.[num].br |
|
dev.hvhid.[num].channel.[num].br.rx |
|
dev.hvhid.[num].channel.[num].br.rx.state |
rx state |
dev.hvhid.[num].channel.[num].br.rx.state_bin |
rx binary state |
dev.hvhid.[num].channel.[num].br.tx |
|
dev.hvhid.[num].channel.[num].br.tx.state |
tx state |
dev.hvhid.[num].channel.[num].br.tx.state_bin |
tx binary state |
dev.hvhid.[num].channel.[num].cpu |
owner CPU id |
dev.hvhid.[num].channel.[num].mnf |
has monitor notification facilities |
dev.hvkbd |
|
dev.hvkbd.%parent |
parent class |
dev.hvkbd.[num] |
|
dev.hvkbd.[num].%desc |
device description |
dev.hvkbd.[num].%driver |
device driver name |
dev.hvkbd.[num].%iommu |
iommu unit handling the device requests |
dev.hvkbd.[num].%location |
device location relative to parent |
dev.hvkbd.[num].%parent |
parent device |
dev.hvkbd.[num].%pnpinfo |
device identification |
dev.hvkbd.[num].channel |
|
dev.hvkbd.[num].channel.[num] |
|
dev.hvkbd.[num].channel.[num].br |
|
dev.hvkbd.[num].channel.[num].br.rx |
|
dev.hvkbd.[num].channel.[num].br.rx.state |
rx state |
dev.hvkbd.[num].channel.[num].br.rx.state_bin |
rx binary state |
dev.hvkbd.[num].channel.[num].br.tx |
|
dev.hvkbd.[num].channel.[num].br.tx.state |
tx state |
dev.hvkbd.[num].channel.[num].br.tx.state_bin |
tx binary state |
dev.hvkbd.[num].channel.[num].cpu |
owner CPU id |
dev.hvkbd.[num].channel.[num].mnf |
has monitor notification facilities |
dev.hvkbd.[num].debug |
debug hyperv keyboard |
dev.hvkvp |
|
dev.hvkvp.%parent |
parent class |
dev.hvkvp.[num] |
|
dev.hvkvp.[num].%desc |
device description |
dev.hvkvp.[num].%driver |
device driver name |
dev.hvkvp.[num].%iommu |
iommu unit handling the device requests |
dev.hvkvp.[num].%location |
device location relative to parent |
dev.hvkvp.[num].%parent |
parent device |
dev.hvkvp.[num].%pnpinfo |
device identification |
dev.hvkvp.[num].channel |
|
dev.hvkvp.[num].channel.[num] |
|
dev.hvkvp.[num].channel.[num].br |
|
dev.hvkvp.[num].channel.[num].br.rx |
|
dev.hvkvp.[num].channel.[num].br.rx.state |
rx state |
dev.hvkvp.[num].channel.[num].br.rx.state_bin |
rx binary state |
dev.hvkvp.[num].channel.[num].br.tx |
|
dev.hvkvp.[num].channel.[num].br.tx.state |
tx state |
dev.hvkvp.[num].channel.[num].br.tx.state_bin |
tx binary state |
dev.hvkvp.[num].channel.[num].cpu |
owner CPU id |
dev.hvkvp.[num].channel.[num].mnf |
has monitor notification facilities |
dev.hvkvp.[num].fw_version |
framework version |
dev.hvkvp.[num].hv_kvp_log |
Hyperv KVP service log level |
dev.hvkvp.[num].msg_version |
message version |
dev.hvshutdown |
|
dev.hvshutdown.%parent |
parent class |
dev.hvshutdown.[num] |
|
dev.hvshutdown.[num].%desc |
device description |
dev.hvshutdown.[num].%driver |
device driver name |
dev.hvshutdown.[num].%iommu |
iommu unit handling the device requests |
dev.hvshutdown.[num].%location |
device location relative to parent |
dev.hvshutdown.[num].%parent |
parent device |
dev.hvshutdown.[num].%pnpinfo |
device identification |
dev.hvshutdown.[num].channel |
|
dev.hvshutdown.[num].channel.[num] |
|
dev.hvshutdown.[num].channel.[num].br |
|
dev.hvshutdown.[num].channel.[num].br.rx |
|
dev.hvshutdown.[num].channel.[num].br.rx.state |
rx state |
dev.hvshutdown.[num].channel.[num].br.rx.state_bin |
rx binary state |
dev.hvshutdown.[num].channel.[num].br.tx |
|
dev.hvshutdown.[num].channel.[num].br.tx.state |
tx state |
dev.hvshutdown.[num].channel.[num].br.tx.state_bin |
tx binary state |
dev.hvshutdown.[num].channel.[num].cpu |
owner CPU id |
dev.hvshutdown.[num].channel.[num].mnf |
has monitor notification facilities |
dev.hvshutdown.[num].fw_version |
framework version |
dev.hvshutdown.[num].msg_version |
message version |
dev.hvtimesync |
|
dev.hvtimesync.%parent |
parent class |
dev.hvtimesync.[num] |
|
dev.hvtimesync.[num].%desc |
device description |
dev.hvtimesync.[num].%driver |
device driver name |
dev.hvtimesync.[num].%iommu |
iommu unit handling the device requests |
dev.hvtimesync.[num].%location |
device location relative to parent |
dev.hvtimesync.[num].%parent |
parent device |
dev.hvtimesync.[num].%pnpinfo |
device identification |
dev.hvtimesync.[num].channel |
|
dev.hvtimesync.[num].channel.[num] |
|
dev.hvtimesync.[num].channel.[num].br |
|
dev.hvtimesync.[num].channel.[num].br.rx |
|
dev.hvtimesync.[num].channel.[num].br.rx.state |
rx state |
dev.hvtimesync.[num].channel.[num].br.rx.state_bin |
rx binary state |
dev.hvtimesync.[num].channel.[num].br.tx |
|
dev.hvtimesync.[num].channel.[num].br.tx.state |
tx state |
dev.hvtimesync.[num].channel.[num].br.tx.state_bin |
tx binary state |
dev.hvtimesync.[num].channel.[num].cpu |
owner CPU id |
dev.hvtimesync.[num].channel.[num].mnf |
has monitor notification facilities |
dev.hvtimesync.[num].fw_version |
framework version |
dev.hvtimesync.[num].msg_version |
message version |
dev.hwpstate |
|
dev.hwpstate.%parent |
parent class |
dev.hwpstate.[num] |
|
dev.hwpstate.[num].%desc |
device description |
dev.hwpstate.[num].%driver |
device driver name |
dev.hwpstate.[num].%iommu |
iommu unit handling the device requests |
dev.hwpstate.[num].%location |
device location relative to parent |
dev.hwpstate.[num].%parent |
parent device |
dev.hwpstate.[num].%pnpinfo |
device identification |
dev.hwpstate.[num].freq_settings |
CPU frequency driver settings |
dev.hwpstate_intel |
|
dev.hwpstate_intel.%parent |
parent class |
dev.hwpstate_intel.[num] |
|
dev.hwpstate_intel.[num].%desc |
device description |
dev.hwpstate_intel.[num].%driver |
device driver name |
dev.hwpstate_intel.[num].%iommu |
iommu unit handling the device requests |
dev.hwpstate_intel.[num].%location |
device location relative to parent |
dev.hwpstate_intel.[num].%parent |
parent device |
dev.hwpstate_intel.[num].%pnpinfo |
device identification |
dev.hwpstate_intel.[num].epp |
Efficiency/Performance Preference (range from 0, most performant, through 100, most efficient) |
dev.hwpstate_intel.[num].freq_settings |
CPU frequency driver settings |
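
dev.hwpstate_intel.[num].epp above takes an integer preference between 0 (favor performance) and 100 (favor efficiency). A minimal sketch, assuming CPU 0 and an illustrative balanced setting of 50:

/*
 * Sketch: set the Intel HWP energy/performance preference for CPU 0.
 * The value 50 is illustrative; valid values are 0..100.  Root only.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>

int
main(void)
{
	int epp = 50;

	if (sysctlbyname("dev.hwpstate_intel.0.epp", NULL, NULL,
	    &epp, sizeof(epp)) == -1)
		err(1, "dev.hwpstate_intel.0.epp");
	return (0);
}
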
dev.ichsmb |
|
dev.ichsmb.%parent |
parent class |
dev.ichsmb.[num] |
|
dev.ichsmb.[num].%desc |
device description |
dev.ichsmb.[num].%domain |
NUMA domain |
dev.ichsmb.[num].%driver |
device driver name |
dev.ichsmb.[num].%iommu |
iommu unit handling the device requests |
dev.ichsmb.[num].%location |
device location relative to parent |
dev.ichsmb.[num].%parent |
parent device |
dev.ichsmb.[num].%pnpinfo |
device identification |
dev.ig4iic |
|
dev.ig4iic.%parent |
parent class |
dev.ig4iic.[num] |
|
dev.ig4iic.[num].%desc |
device description |
dev.ig4iic.[num].%driver |
device driver name |
dev.ig4iic.[num].%iommu |
iommu unit handling the device requests |
dev.ig4iic.[num].%location |
device location relative to parent |
dev.ig4iic.[num].%parent |
parent device |
dev.ig4iic.[num].%pnpinfo |
device identification |
dev.igb |
|
dev.igb.%parent |
parent class |
dev.igb.[num] |
|
dev.igb.[num].%desc |
device description |
dev.igb.[num].%domain |
NUMA domain |
dev.igb.[num].%driver |
device driver name |
dev.igb.[num].%iommu |
iommu unit handling the device requests |
dev.igb.[num].%location |
device location relative to parent |
dev.igb.[num].%parent |
parent device |
dev.igb.[num].%pnpinfo |
device identification |
dev.igb.[num].debug |
Debug Information |
dev.igb.[num].device_control |
Device Control Register |
dev.igb.[num].dmac |
DMA Coalesce |
dev.igb.[num].dropped |
Driver dropped packets |
dev.igb.[num].eee_control |
Disable Energy Efficient Ethernet |
dev.igb.[num].enable_aim |
Interrupt Moderation (1=normal, 2=lowlatency) |
dev.igb.[num].fc |
Flow Control |
dev.igb.[num].fc_high_water |
Flow Control High Watermark |
dev.igb.[num].fc_low_water |
Flow Control Low Watermark |
dev.igb.[num].fw_version |
Prints FW/NVM Versions |
dev.igb.[num].iflib |
IFLIB fields |
dev.igb.[num].iflib.allocated_msix_vectors |
total # of MSI-X vectors allocated by driver |
dev.igb.[num].iflib.core_offset |
offset to start using cores at |
dev.igb.[num].iflib.disable_msix |
disable MSI-X (default 0) |
dev.igb.[num].iflib.driver_version |
driver version |
dev.igb.[num].iflib.override_nrxds |
list of # of RX descriptors to use, 0 = use default # |
dev.igb.[num].iflib.override_nrxqs |
# of rxqs to use, 0 => use default # |
dev.igb.[num].iflib.override_ntxds |
list of # of TX descriptors to use, 0 = use default # |
dev.igb.[num].iflib.override_ntxqs |
# of txqs to use, 0 => use default # |
dev.igb.[num].iflib.override_qs_enable |
permit #txq != #rxq |
dev.igb.[num].iflib.rx_budget |
set the RX budget |
dev.igb.[num].iflib.rxq0 |
Queue Name |
dev.igb.[num].iflib.rxq0.cpu |
cpu this queue is bound to |
dev.igb.[num].iflib.rxq0.rxq_fl0 |
freelist Name |
dev.igb.[num].iflib.rxq0.rxq_fl0.buf_size |
buffer size |
dev.igb.[num].iflib.rxq0.rxq_fl0.cidx |
Consumer Index |
dev.igb.[num].iflib.rxq0.rxq_fl0.credits |
credits available |
dev.igb.[num].iflib.rxq0.rxq_fl0.pidx |
Producer Index |
dev.igb.[num].iflib.rxq1 |
Queue Name |
dev.igb.[num].iflib.rxq1.cpu |
cpu this queue is bound to |
dev.igb.[num].iflib.rxq1.rxq_fl0 |
freelist Name |
dev.igb.[num].iflib.rxq1.rxq_fl0.buf_size |
buffer size |
dev.igb.[num].iflib.rxq1.rxq_fl0.cidx |
Consumer Index |
dev.igb.[num].iflib.rxq1.rxq_fl0.credits |
credits available |
dev.igb.[num].iflib.rxq1.rxq_fl0.pidx |
Producer Index |
dev.igb.[num].iflib.rxq2 |
Queue Name |
dev.igb.[num].iflib.rxq2.cpu |
cpu this queue is bound to |
dev.igb.[num].iflib.rxq2.rxq_fl0 |
freelist Name |
dev.igb.[num].iflib.rxq2.rxq_fl0.buf_size |
buffer size |
dev.igb.[num].iflib.rxq2.rxq_fl0.cidx |
Consumer Index |
dev.igb.[num].iflib.rxq2.rxq_fl0.credits |
credits available |
dev.igb.[num].iflib.rxq2.rxq_fl0.pidx |
Producer Index |
dev.igb.[num].iflib.rxq3 |
Queue Name |
dev.igb.[num].iflib.rxq3.cpu |
cpu this queue is bound to |
dev.igb.[num].iflib.rxq3.rxq_fl0 |
freelist Name |
dev.igb.[num].iflib.rxq3.rxq_fl0.buf_size |
buffer size |
dev.igb.[num].iflib.rxq3.rxq_fl0.cidx |
Consumer Index |
dev.igb.[num].iflib.rxq3.rxq_fl0.credits |
credits available |
dev.igb.[num].iflib.rxq3.rxq_fl0.pidx |
Producer Index |
dev.igb.[num].iflib.rxq4 |
Queue Name |
dev.igb.[num].iflib.rxq4.cpu |
cpu this queue is bound to |
dev.igb.[num].iflib.rxq4.rxq_fl0 |
freelist Name |
dev.igb.[num].iflib.rxq4.rxq_fl0.buf_size |
buffer size |
dev.igb.[num].iflib.rxq4.rxq_fl0.cidx |
Consumer Index |
dev.igb.[num].iflib.rxq4.rxq_fl0.credits |
credits available |
dev.igb.[num].iflib.rxq4.rxq_fl0.pidx |
Producer Index |
dev.igb.[num].iflib.rxq5 |
Queue Name |
dev.igb.[num].iflib.rxq5.cpu |
cpu this queue is bound to |
dev.igb.[num].iflib.rxq5.rxq_fl0 |
freelist Name |
dev.igb.[num].iflib.rxq5.rxq_fl0.buf_size |
buffer size |
dev.igb.[num].iflib.rxq5.rxq_fl0.cidx |
Consumer Index |
dev.igb.[num].iflib.rxq5.rxq_fl0.credits |
credits available |
dev.igb.[num].iflib.rxq5.rxq_fl0.pidx |
Producer Index |
dev.igb.[num].iflib.rxq6 |
Queue Name |
dev.igb.[num].iflib.rxq6.cpu |
cpu this queue is bound to |
dev.igb.[num].iflib.rxq6.rxq_fl0 |
freelist Name |
dev.igb.[num].iflib.rxq6.rxq_fl0.buf_size |
buffer size |
dev.igb.[num].iflib.rxq6.rxq_fl0.cidx |
Consumer Index |
dev.igb.[num].iflib.rxq6.rxq_fl0.credits |
credits available |
dev.igb.[num].iflib.rxq6.rxq_fl0.pidx |
Producer Index |
dev.igb.[num].iflib.rxq7 |
Queue Name |
dev.igb.[num].iflib.rxq7.cpu |
cpu this queue is bound to |
dev.igb.[num].iflib.rxq7.rxq_fl0 |
freelist Name |
dev.igb.[num].iflib.rxq7.rxq_fl0.buf_size |
buffer size |
dev.igb.[num].iflib.rxq7.rxq_fl0.cidx |
Consumer Index |
dev.igb.[num].iflib.rxq7.rxq_fl0.credits |
credits available |
dev.igb.[num].iflib.rxq7.rxq_fl0.pidx |
Producer Index |
dev.igb.[num].iflib.separate_txrx |
use separate cores for TX and RX |
dev.igb.[num].iflib.tx_abdicate |
cause TX to abdicate instead of running to completion |
dev.igb.[num].iflib.txq0 |
Queue Name |
dev.igb.[num].iflib.txq0.cpu |
cpu this queue is bound to |
dev.igb.[num].iflib.txq0.m_pullups |
# of times m_pullup was called |
dev.igb.[num].iflib.txq0.mbuf_defrag |
# of times m_defrag was called |
dev.igb.[num].iflib.txq0.mbuf_defrag_failed |
# of times m_defrag failed |
dev.igb.[num].iflib.txq0.no_desc_avail |
# of times no descriptors were available |
dev.igb.[num].iflib.txq0.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.igb.[num].iflib.txq0.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.igb.[num].iflib.txq0.r_drops |
# of drops in the mp_ring for this queue |
dev.igb.[num].iflib.txq0.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.igb.[num].iflib.txq0.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.igb.[num].iflib.txq0.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.igb.[num].iflib.txq0.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.igb.[num].iflib.txq0.ring_state |
soft ring state |
dev.igb.[num].iflib.txq0.tx_map_failed |
# of times DMA map failed |
dev.igb.[num].iflib.txq0.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.igb.[num].iflib.txq0.txq_cidx |
Consumer Index |
dev.igb.[num].iflib.txq0.txq_cidx_processed |
Consumer Index seen by credit update |
dev.igb.[num].iflib.txq0.txq_cleaned |
total cleaned |
dev.igb.[num].iflib.txq0.txq_in_use |
descriptors in use |
dev.igb.[num].iflib.txq0.txq_pidx |
Producer Index |
dev.igb.[num].iflib.txq0.txq_processed |
descriptors processed for clean |
dev.igb.[num].iflib.txq1 |
Queue Name |
dev.igb.[num].iflib.txq1.cpu |
cpu this queue is bound to |
dev.igb.[num].iflib.txq1.m_pullups |
# of times m_pullup was called |
dev.igb.[num].iflib.txq1.mbuf_defrag |
# of times m_defrag was called |
dev.igb.[num].iflib.txq1.mbuf_defrag_failed |
# of times m_defrag failed |
dev.igb.[num].iflib.txq1.no_desc_avail |
# of times no descriptors were available |
dev.igb.[num].iflib.txq1.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.igb.[num].iflib.txq1.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.igb.[num].iflib.txq1.r_drops |
# of drops in the mp_ring for this queue |
dev.igb.[num].iflib.txq1.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.igb.[num].iflib.txq1.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.igb.[num].iflib.txq1.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.igb.[num].iflib.txq1.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.igb.[num].iflib.txq1.ring_state |
soft ring state |
dev.igb.[num].iflib.txq1.tx_map_failed |
# of times DMA map failed |
dev.igb.[num].iflib.txq1.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.igb.[num].iflib.txq1.txq_cidx |
Consumer Index |
dev.igb.[num].iflib.txq1.txq_cidx_processed |
Consumer Index seen by credit update |
dev.igb.[num].iflib.txq1.txq_cleaned |
total cleaned |
dev.igb.[num].iflib.txq1.txq_in_use |
descriptors in use |
dev.igb.[num].iflib.txq1.txq_pidx |
Producer Index |
dev.igb.[num].iflib.txq1.txq_processed |
descriptors processed for clean |
dev.igb.[num].iflib.txq2 |
Queue Name |
dev.igb.[num].iflib.txq2.cpu |
cpu this queue is bound to |
dev.igb.[num].iflib.txq2.m_pullups |
# of times m_pullup was called |
dev.igb.[num].iflib.txq2.mbuf_defrag |
# of times m_defrag was called |
dev.igb.[num].iflib.txq2.mbuf_defrag_failed |
# of times m_defrag failed |
dev.igb.[num].iflib.txq2.no_desc_avail |
# of times no descriptors were available |
dev.igb.[num].iflib.txq2.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.igb.[num].iflib.txq2.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.igb.[num].iflib.txq2.r_drops |
# of drops in the mp_ring for this queue |
dev.igb.[num].iflib.txq2.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.igb.[num].iflib.txq2.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.igb.[num].iflib.txq2.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.igb.[num].iflib.txq2.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.igb.[num].iflib.txq2.ring_state |
soft ring state |
dev.igb.[num].iflib.txq2.tx_map_failed |
# of times DMA map failed |
dev.igb.[num].iflib.txq2.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.igb.[num].iflib.txq2.txq_cidx |
Consumer Index |
dev.igb.[num].iflib.txq2.txq_cidx_processed |
Consumer Index seen by credit update |
dev.igb.[num].iflib.txq2.txq_cleaned |
total cleaned |
dev.igb.[num].iflib.txq2.txq_in_use |
descriptors in use |
dev.igb.[num].iflib.txq2.txq_pidx |
Producer Index |
dev.igb.[num].iflib.txq2.txq_processed |
descriptors processed for clean |
dev.igb.[num].iflib.txq3 |
Queue Name |
dev.igb.[num].iflib.txq3.cpu |
cpu this queue is bound to |
dev.igb.[num].iflib.txq3.m_pullups |
# of times m_pullup was called |
dev.igb.[num].iflib.txq3.mbuf_defrag |
# of times m_defrag was called |
dev.igb.[num].iflib.txq3.mbuf_defrag_failed |
# of times m_defrag failed |
dev.igb.[num].iflib.txq3.no_desc_avail |
# of times no descriptors were available |
dev.igb.[num].iflib.txq3.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.igb.[num].iflib.txq3.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.igb.[num].iflib.txq3.r_drops |
# of drops in the mp_ring for this queue |
dev.igb.[num].iflib.txq3.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.igb.[num].iflib.txq3.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.igb.[num].iflib.txq3.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.igb.[num].iflib.txq3.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.igb.[num].iflib.txq3.ring_state |
soft ring state |
dev.igb.[num].iflib.txq3.tx_map_failed |
# of times DMA map failed |
dev.igb.[num].iflib.txq3.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.igb.[num].iflib.txq3.txq_cidx |
Consumer Index |
dev.igb.[num].iflib.txq3.txq_cidx_processed |
Consumer Index seen by credit update |
dev.igb.[num].iflib.txq3.txq_cleaned |
total cleaned |
dev.igb.[num].iflib.txq3.txq_in_use |
descriptors in use |
dev.igb.[num].iflib.txq3.txq_pidx |
Producer Index |
dev.igb.[num].iflib.txq3.txq_processed |
descriptors processed for clean |
dev.igb.[num].iflib.txq4 |
Queue Name |
dev.igb.[num].iflib.txq4.cpu |
cpu this queue is bound to |
dev.igb.[num].iflib.txq4.m_pullups |
# of times m_pullup was called |
dev.igb.[num].iflib.txq4.mbuf_defrag |
# of times m_defrag was called |
dev.igb.[num].iflib.txq4.mbuf_defrag_failed |
# of times m_defrag failed |
dev.igb.[num].iflib.txq4.no_desc_avail |
# of times no descriptors were available |
dev.igb.[num].iflib.txq4.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.igb.[num].iflib.txq4.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.igb.[num].iflib.txq4.r_drops |
# of drops in the mp_ring for this queue |
dev.igb.[num].iflib.txq4.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.igb.[num].iflib.txq4.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.igb.[num].iflib.txq4.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.igb.[num].iflib.txq4.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.igb.[num].iflib.txq4.ring_state |
soft ring state |
dev.igb.[num].iflib.txq4.tx_map_failed |
# of times DMA map failed |
dev.igb.[num].iflib.txq4.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.igb.[num].iflib.txq4.txq_cidx |
Consumer Index |
dev.igb.[num].iflib.txq4.txq_cidx_processed |
Consumer Index seen by credit update |
dev.igb.[num].iflib.txq4.txq_cleaned |
total cleaned |
dev.igb.[num].iflib.txq4.txq_in_use |
descriptors in use |
dev.igb.[num].iflib.txq4.txq_pidx |
Producer Index |
dev.igb.[num].iflib.txq4.txq_processed |
descriptors processed for clean |
dev.igb.[num].iflib.txq5 |
Queue Name |
dev.igb.[num].iflib.txq5.cpu |
cpu this queue is bound to |
dev.igb.[num].iflib.txq5.m_pullups |
# of times m_pullup was called |
dev.igb.[num].iflib.txq5.mbuf_defrag |
# of times m_defrag was called |
dev.igb.[num].iflib.txq5.mbuf_defrag_failed |
# of times m_defrag failed |
dev.igb.[num].iflib.txq5.no_desc_avail |
# of times no descriptors were available |
dev.igb.[num].iflib.txq5.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.igb.[num].iflib.txq5.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.igb.[num].iflib.txq5.r_drops |
# of drops in the mp_ring for this queue |
dev.igb.[num].iflib.txq5.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.igb.[num].iflib.txq5.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.igb.[num].iflib.txq5.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.igb.[num].iflib.txq5.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.igb.[num].iflib.txq5.ring_state |
soft ring state |
dev.igb.[num].iflib.txq5.tx_map_failed |
# of times DMA map failed |
dev.igb.[num].iflib.txq5.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.igb.[num].iflib.txq5.txq_cidx |
Consumer Index |
dev.igb.[num].iflib.txq5.txq_cidx_processed |
Consumer Index seen by credit update |
dev.igb.[num].iflib.txq5.txq_cleaned |
total cleaned |
dev.igb.[num].iflib.txq5.txq_in_use |
descriptors in use |
dev.igb.[num].iflib.txq5.txq_pidx |
Producer Index |
dev.igb.[num].iflib.txq5.txq_processed |
descriptors processed for clean |
dev.igb.[num].iflib.txq6 |
Queue Name |
dev.igb.[num].iflib.txq6.cpu |
cpu this queue is bound to |
dev.igb.[num].iflib.txq6.m_pullups |
# of times m_pullup was called |
dev.igb.[num].iflib.txq6.mbuf_defrag |
# of times m_defrag was called |
dev.igb.[num].iflib.txq6.mbuf_defrag_failed |
# of times m_defrag failed |
dev.igb.[num].iflib.txq6.no_desc_avail |
# of times no descriptors were available |
dev.igb.[num].iflib.txq6.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.igb.[num].iflib.txq6.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.igb.[num].iflib.txq6.r_drops |
# of drops in the mp_ring for this queue |
dev.igb.[num].iflib.txq6.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.igb.[num].iflib.txq6.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.igb.[num].iflib.txq6.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.igb.[num].iflib.txq6.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.igb.[num].iflib.txq6.ring_state |
soft ring state |
dev.igb.[num].iflib.txq6.tx_map_failed |
# of times DMA map failed |
dev.igb.[num].iflib.txq6.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.igb.[num].iflib.txq6.txq_cidx |
Consumer Index |
dev.igb.[num].iflib.txq6.txq_cidx_processed |
Consumer Index seen by credit update |
dev.igb.[num].iflib.txq6.txq_cleaned |
total cleaned |
dev.igb.[num].iflib.txq6.txq_in_use |
descriptors in use |
dev.igb.[num].iflib.txq6.txq_pidx |
Producer Index |
dev.igb.[num].iflib.txq6.txq_processed |
descriptors processed for clean |
dev.igb.[num].iflib.txq7 |
Queue Name |
dev.igb.[num].iflib.txq7.cpu |
cpu this queue is bound to |
dev.igb.[num].iflib.txq7.m_pullups |
# of times m_pullup was called |
dev.igb.[num].iflib.txq7.mbuf_defrag |
# of times m_defrag was called |
dev.igb.[num].iflib.txq7.mbuf_defrag_failed |
# of times m_defrag failed |
dev.igb.[num].iflib.txq7.no_desc_avail |
# of times no descriptors were available |
dev.igb.[num].iflib.txq7.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.igb.[num].iflib.txq7.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.igb.[num].iflib.txq7.r_drops |
# of drops in the mp_ring for this queue |
dev.igb.[num].iflib.txq7.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.igb.[num].iflib.txq7.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.igb.[num].iflib.txq7.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.igb.[num].iflib.txq7.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.igb.[num].iflib.txq7.ring_state |
soft ring state |
dev.igb.[num].iflib.txq7.tx_map_failed |
# of times DMA map failed |
dev.igb.[num].iflib.txq7.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.igb.[num].iflib.txq7.txq_cidx |
Consumer Index |
dev.igb.[num].iflib.txq7.txq_cidx_processed |
Consumer Index seen by credit update |
dev.igb.[num].iflib.txq7.txq_cleaned |
total cleaned |
dev.igb.[num].iflib.txq7.txq_in_use |
descriptors in use |
dev.igb.[num].iflib.txq7.txq_pidx |
Producer Index |
dev.igb.[num].iflib.txq7.txq_processed |
descriptors processed for clean |
dev.igb.[num].iflib.use_extra_msix_vectors |
attempt to reserve the given number of extra MSI-X vectors during driver load for the creation of additional interfaces later |
dev.igb.[num].iflib.use_logical_cores |
try to make use of logical cores for TX and RX |
dev.igb.[num].interrupts |
Interrupt Statistics |
dev.igb.[num].interrupts.asserts |
Interrupt Assertion Count |
dev.igb.[num].interrupts.rx_abs_timer |
Interrupt Cause Rx Abs Timer Expire Count |
dev.igb.[num].interrupts.rx_desc_min_thresh |
Interrupt Cause Rx Desc Min Thresh Count |
dev.igb.[num].interrupts.rx_overrun |
Interrupt Cause Receiver Overrun Count |
dev.igb.[num].interrupts.rx_pkt_timer |
Interrupt Cause Rx Pkt Timer Expire Count |
dev.igb.[num].interrupts.tx_abs_timer |
Interrupt Cause Tx Abs Timer Expire Count |
dev.igb.[num].interrupts.tx_pkt_timer |
Interrupt Cause Tx Pkt Timer Expire Count |
dev.igb.[num].interrupts.tx_queue_empty |
Interrupt Cause Tx Queue Empty Count |
dev.igb.[num].interrupts.tx_queue_min_thresh |
Interrupt Cause Tx Queue Min Thresh Count |
dev.igb.[num].link_irq |
Link MSI-X IRQ Handled |
dev.igb.[num].mac_stats |
Statistics |
dev.igb.[num].mac_stats.alignment_errs |
Alignment Errors |
dev.igb.[num].mac_stats.bcast_pkts_recvd |
Broadcast Packets Received |
dev.igb.[num].mac_stats.bcast_pkts_txd |
Broadcast Packets Transmitted |
dev.igb.[num].mac_stats.coll_ext_errs |
Collision/Carrier extension errors |
dev.igb.[num].mac_stats.collision_count |
Collision Count |
dev.igb.[num].mac_stats.crc_errs |
CRC errors |
dev.igb.[num].mac_stats.defer_count |
Defer Count |
dev.igb.[num].mac_stats.excess_coll |
Excessive collisions |
dev.igb.[num].mac_stats.good_octets_recvd |
Good Octets Received |
dev.igb.[num].mac_stats.good_octets_txd |
Good Octets Transmitted |
dev.igb.[num].mac_stats.good_pkts_recvd |
Good Packets Received |
dev.igb.[num].mac_stats.good_pkts_txd |
Good Packets Transmitted |
dev.igb.[num].mac_stats.late_coll |
Late collisions |
dev.igb.[num].mac_stats.mcast_pkts_recvd |
Multicast Packets Received |
dev.igb.[num].mac_stats.mcast_pkts_txd |
Multicast Packets Transmitted |
dev.igb.[num].mac_stats.mgmt_pkts_drop |
Management Packets Dropped |
dev.igb.[num].mac_stats.mgmt_pkts_recvd |
Management Packets Received |
dev.igb.[num].mac_stats.mgmt_pkts_txd |
Management Packets Transmitted |
dev.igb.[num].mac_stats.missed_packets |
Missed Packets |
dev.igb.[num].mac_stats.multiple_coll |
Multiple collisions |
dev.igb.[num].mac_stats.recv_errs |
Receive Errors |
dev.igb.[num].mac_stats.recv_fragmented |
Fragmented Packets Received |
dev.igb.[num].mac_stats.recv_jabber |
Received Jabber |
dev.igb.[num].mac_stats.recv_length_errors |
Receive Length Errors |
dev.igb.[num].mac_stats.recv_no_buff |
Receive No Buffers |
dev.igb.[num].mac_stats.recv_oversize |
Oversized Packets Received |
dev.igb.[num].mac_stats.recv_undersize |
Receive Undersize |
dev.igb.[num].mac_stats.rx_frames_1024_1522 |
1024-1522 byte frames received |
dev.igb.[num].mac_stats.rx_frames_128_255 |
128-255 byte frames received |
dev.igb.[num].mac_stats.rx_frames_256_511 |
256-511 byte frames received |
dev.igb.[num].mac_stats.rx_frames_512_1023 |
512-1023 byte frames received |
dev.igb.[num].mac_stats.rx_frames_64 |
64 byte frames received |
dev.igb.[num].mac_stats.rx_frames_65_127 |
65-127 byte frames received |
dev.igb.[num].mac_stats.sequence_errors |
Sequence Errors |
dev.igb.[num].mac_stats.single_coll |
Single collisions |
dev.igb.[num].mac_stats.symbol_errors |
Symbol Errors |
dev.igb.[num].mac_stats.total_pkts_recvd |
Total Packets Received |
dev.igb.[num].mac_stats.total_pkts_txd |
Total Packets Transmitted |
dev.igb.[num].mac_stats.tso_ctx_fail |
TSO Contexts Failed |
dev.igb.[num].mac_stats.tso_txd |
TSO Contexts Transmitted |
dev.igb.[num].mac_stats.tx_frames_1024_1522 |
1024-1522 byte frames transmitted |
dev.igb.[num].mac_stats.tx_frames_128_255 |
128-255 byte frames transmitted |
dev.igb.[num].mac_stats.tx_frames_256_511 |
256-511 byte frames transmitted |
dev.igb.[num].mac_stats.tx_frames_512_1023 |
512-1023 byte frames transmitted |
dev.igb.[num].mac_stats.tx_frames_64 |
64 byte frames transmitted |
dev.igb.[num].mac_stats.tx_frames_65_127 |
65-127 byte frames transmitted |
dev.igb.[num].mac_stats.unsupported_fc_recvd |
Unsupported Flow Control Received |
dev.igb.[num].mac_stats.xoff_recvd |
XOFF Received |
dev.igb.[num].mac_stats.xoff_txd |
XOFF Transmitted |
dev.igb.[num].mac_stats.xon_recvd |
XON Received |
dev.igb.[num].mac_stats.xon_txd |
XON Transmitted |
dev.igb.[num].nvm |
NVM Information |
dev.igb.[num].queue_rx_0 |
RX Queue Name |
dev.igb.[num].queue_rx_0.interrupt_rate |
Interrupt Rate |
dev.igb.[num].queue_rx_0.rx_irq |
Queue MSI-X Receive Interrupts |
dev.igb.[num].queue_rx_0.rxd_head |
Receive Descriptor Head |
dev.igb.[num].queue_rx_0.rxd_tail |
Receive Descriptor Tail |
dev.igb.[num].queue_rx_1 |
RX Queue Name |
dev.igb.[num].queue_rx_1.interrupt_rate |
Interrupt Rate |
dev.igb.[num].queue_rx_1.rx_irq |
Queue MSI-X Receive Interrupts |
dev.igb.[num].queue_rx_1.rxd_head |
Receive Descriptor Head |
dev.igb.[num].queue_rx_1.rxd_tail |
Receive Descriptor Tail |
dev.igb.[num].queue_rx_2 |
RX Queue Name |
dev.igb.[num].queue_rx_2.interrupt_rate |
Interrupt Rate |
dev.igb.[num].queue_rx_2.rx_irq |
Queue MSI-X Receive Interrupts |
dev.igb.[num].queue_rx_2.rxd_head |
Receive Descriptor Head |
dev.igb.[num].queue_rx_2.rxd_tail |
Receive Descriptor Tail |
dev.igb.[num].queue_rx_3 |
RX Queue Name |
dev.igb.[num].queue_rx_3.interrupt_rate |
Interrupt Rate |
dev.igb.[num].queue_rx_3.rx_irq |
Queue MSI-X Receive Interrupts |
dev.igb.[num].queue_rx_3.rxd_head |
Receive Descriptor Head |
dev.igb.[num].queue_rx_3.rxd_tail |
Receive Descriptor Tail |
dev.igb.[num].queue_rx_4 |
RX Queue Name |
dev.igb.[num].queue_rx_4.interrupt_rate |
Interrupt Rate |
dev.igb.[num].queue_rx_4.rx_irq |
Queue MSI-X Receive Interrupts |
dev.igb.[num].queue_rx_4.rxd_head |
Receive Descriptor Head |
dev.igb.[num].queue_rx_4.rxd_tail |
Receive Descriptor Tail |
dev.igb.[num].queue_rx_5 |
RX Queue Name |
dev.igb.[num].queue_rx_5.interrupt_rate |
Interrupt Rate |
dev.igb.[num].queue_rx_5.rx_irq |
Queue MSI-X Receive Interrupts |
dev.igb.[num].queue_rx_5.rxd_head |
Receive Descriptor Head |
dev.igb.[num].queue_rx_5.rxd_tail |
Receive Descriptor Tail |
dev.igb.[num].queue_rx_6 |
RX Queue Name |
dev.igb.[num].queue_rx_6.interrupt_rate |
Interrupt Rate |
dev.igb.[num].queue_rx_6.rx_irq |
Queue MSI-X Receive Interrupts |
dev.igb.[num].queue_rx_6.rxd_head |
Receive Descriptor Head |
dev.igb.[num].queue_rx_6.rxd_tail |
Receive Descriptor Tail |
dev.igb.[num].queue_rx_7 |
RX Queue Name |
dev.igb.[num].queue_rx_7.interrupt_rate |
Interrupt Rate |
dev.igb.[num].queue_rx_7.rx_irq |
Queue MSI-X Receive Interrupts |
dev.igb.[num].queue_rx_7.rxd_head |
Receive Descriptor Head |
dev.igb.[num].queue_rx_7.rxd_tail |
Receive Descriptor Tail |
dev.igb.[num].queue_tx_0 |
TX Queue Name |
dev.igb.[num].queue_tx_0.interrupt_rate |
Interrupt Rate |
dev.igb.[num].queue_tx_0.tx_irq |
Queue MSI-X Transmit Interrupts |
dev.igb.[num].queue_tx_0.txd_head |
Transmit Descriptor Head |
dev.igb.[num].queue_tx_0.txd_tail |
Transmit Descriptor Tail |
dev.igb.[num].queue_tx_1 |
TX Queue Name |
dev.igb.[num].queue_tx_1.interrupt_rate |
Interrupt Rate |
dev.igb.[num].queue_tx_1.tx_irq |
Queue MSI-X Transmit Interrupts |
dev.igb.[num].queue_tx_1.txd_head |
Transmit Descriptor Head |
dev.igb.[num].queue_tx_1.txd_tail |
Transmit Descriptor Tail |
dev.igb.[num].queue_tx_2 |
TX Queue Name |
dev.igb.[num].queue_tx_2.interrupt_rate |
Interrupt Rate |
dev.igb.[num].queue_tx_2.tx_irq |
Queue MSI-X Transmit Interrupts |
dev.igb.[num].queue_tx_2.txd_head |
Transmit Descriptor Head |
dev.igb.[num].queue_tx_2.txd_tail |
Transmit Descriptor Tail |
dev.igb.[num].queue_tx_3 |
TX Queue Name |
dev.igb.[num].queue_tx_3.interrupt_rate |
Interrupt Rate |
dev.igb.[num].queue_tx_3.tx_irq |
Queue MSI-X Transmit Interrupts |
dev.igb.[num].queue_tx_3.txd_head |
Transmit Descriptor Head |
dev.igb.[num].queue_tx_3.txd_tail |
Transmit Descriptor Tail |
dev.igb.[num].queue_tx_4 |
TX Queue Name |
dev.igb.[num].queue_tx_4.interrupt_rate |
Interrupt Rate |
dev.igb.[num].queue_tx_4.tx_irq |
Queue MSI-X Transmit Interrupts |
dev.igb.[num].queue_tx_4.txd_head |
Transmit Descriptor Head |
dev.igb.[num].queue_tx_4.txd_tail |
Transmit Descriptor Tail |
dev.igb.[num].queue_tx_5 |
TX Queue Name |
dev.igb.[num].queue_tx_5.interrupt_rate |
Interrupt Rate |
dev.igb.[num].queue_tx_5.tx_irq |
Queue MSI-X Transmit Interrupts |
dev.igb.[num].queue_tx_5.txd_head |
Transmit Descriptor Head |
dev.igb.[num].queue_tx_5.txd_tail |
Transmit Descriptor Tail |
dev.igb.[num].queue_tx_6 |
TX Queue Name |
dev.igb.[num].queue_tx_6.interrupt_rate |
Interrupt Rate |
dev.igb.[num].queue_tx_6.tx_irq |
Queue MSI-X Transmit Interrupts |
dev.igb.[num].queue_tx_6.txd_head |
Transmit Descriptor Head |
dev.igb.[num].queue_tx_6.txd_tail |
Transmit Descriptor Tail |
dev.igb.[num].queue_tx_7 |
TX Queue Name |
dev.igb.[num].queue_tx_7.interrupt_rate |
Interrupt Rate |
dev.igb.[num].queue_tx_7.tx_irq |
Queue MSI-X Transmit Interrupts |
dev.igb.[num].queue_tx_7.txd_head |
Transmit Descriptor Head |
dev.igb.[num].queue_tx_7.txd_tail |
Transmit Descriptor Tail |
dev.igb.[num].reg_dump |
Dump Registers |
dev.igb.[num].rs_dump |
Dump RS indexes |
dev.igb.[num].rx_control |
Receiver Control Register |
dev.igb.[num].rx_overruns |
RX overruns |
dev.igb.[num].watchdog_timeouts |
Watchdog timeouts |
dev.igc |
|
dev.igc.%parent |
parent class |
dev.igc.[num] |
|
dev.igc.[num].%desc |
device description |
dev.igc.[num].%driver |
device driver name |
dev.igc.[num].%iommu |
iommu unit handling the device requests |
dev.igc.[num].%location |
device location relative to parent |
dev.igc.[num].%parent |
parent device |
dev.igc.[num].%pnpinfo |
device identification |
dev.igc.[num].debug |
Debug Information |
dev.igc.[num].device_control |
Device Control Register |
dev.igc.[num].dmac |
DMA Coalesce |
dev.igc.[num].dropped |
Driver dropped packets |
dev.igc.[num].eee_control |
Disable Energy Efficient Ethernet |
dev.igc.[num].enable_aim |
Interrupt Moderation (1=normal, 2=lowlatency) |
dev.igc.[num].fc |
Flow Control |
dev.igc.[num].fc_high_water |
Flow Control High Watermark |
dev.igc.[num].fc_low_water |
Flow Control Low Watermark |
dev.igc.[num].fw_version |
Prints FW/NVM Versions |
dev.igc.[num].iflib |
IFLIB fields |
dev.igc.[num].iflib.allocated_msix_vectors |
total # of MSI-X vectors allocated by driver |
dev.igc.[num].iflib.core_offset |
offset to start using cores at |
dev.igc.[num].iflib.disable_msix |
disable MSI-X (default 0) |
dev.igc.[num].iflib.driver_version |
driver version |
dev.igc.[num].iflib.override_nrxds |
list of # of RX descriptors to use, 0 = use default # |
dev.igc.[num].iflib.override_nrxqs |
# of rxqs to use, 0 => use default # |
dev.igc.[num].iflib.override_ntxds |
list of # of TX descriptors to use, 0 = use default # |
dev.igc.[num].iflib.override_ntxqs |
# of txqs to use, 0 => use default # |
dev.igc.[num].iflib.override_qs_enable |
permit #txq != #rxq |
dev.igc.[num].iflib.rx_budget |
set the RX budget |
dev.igc.[num].iflib.rxq0 |
Queue Name |
dev.igc.[num].iflib.rxq0.cpu |
cpu this queue is bound to |
dev.igc.[num].iflib.rxq0.rxq_fl0 |
freelist Name |
dev.igc.[num].iflib.rxq0.rxq_fl0.buf_size |
buffer size |
dev.igc.[num].iflib.rxq0.rxq_fl0.cidx |
Consumer Index |
dev.igc.[num].iflib.rxq0.rxq_fl0.credits |
credits available |
dev.igc.[num].iflib.rxq0.rxq_fl0.pidx |
Producer Index |
dev.igc.[num].iflib.rxq1 |
Queue Name |
dev.igc.[num].iflib.rxq1.cpu |
cpu this queue is bound to |
dev.igc.[num].iflib.rxq1.rxq_fl0 |
freelist Name |
dev.igc.[num].iflib.rxq1.rxq_fl0.buf_size |
buffer size |
dev.igc.[num].iflib.rxq1.rxq_fl0.cidx |
Consumer Index |
dev.igc.[num].iflib.rxq1.rxq_fl0.credits |
credits available |
dev.igc.[num].iflib.rxq1.rxq_fl0.pidx |
Producer Index |
dev.igc.[num].iflib.rxq2 |
Queue Name |
dev.igc.[num].iflib.rxq2.cpu |
cpu this queue is bound to |
dev.igc.[num].iflib.rxq2.rxq_fl0 |
freelist Name |
dev.igc.[num].iflib.rxq2.rxq_fl0.buf_size |
buffer size |
dev.igc.[num].iflib.rxq2.rxq_fl0.cidx |
Consumer Index |
dev.igc.[num].iflib.rxq2.rxq_fl0.credits |
credits available |
dev.igc.[num].iflib.rxq2.rxq_fl0.pidx |
Producer Index |
dev.igc.[num].iflib.rxq3 |
Queue Name |
dev.igc.[num].iflib.rxq3.cpu |
cpu this queue is bound to |
dev.igc.[num].iflib.rxq3.rxq_fl0 |
freelist Name |
dev.igc.[num].iflib.rxq3.rxq_fl0.buf_size |
buffer size |
dev.igc.[num].iflib.rxq3.rxq_fl0.cidx |
Consumer Index |
dev.igc.[num].iflib.rxq3.rxq_fl0.credits |
credits available |
dev.igc.[num].iflib.rxq3.rxq_fl0.pidx |
Producer Index |
dev.igc.[num].iflib.separate_txrx |
use separate cores for TX and RX |
dev.igc.[num].iflib.tx_abdicate |
cause TX to abdicate instead of running to completion |
dev.igc.[num].iflib.txq0 |
Queue Name |
dev.igc.[num].iflib.txq0.cpu |
cpu this queue is bound to |
dev.igc.[num].iflib.txq0.m_pullups |
# of times m_pullup was called |
dev.igc.[num].iflib.txq0.mbuf_defrag |
# of times m_defrag was called |
dev.igc.[num].iflib.txq0.mbuf_defrag_failed |
# of times m_defrag failed |
dev.igc.[num].iflib.txq0.no_desc_avail |
# of times no descriptors were available |
dev.igc.[num].iflib.txq0.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.igc.[num].iflib.txq0.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.igc.[num].iflib.txq0.r_drops |
# of drops in the mp_ring for this queue |
dev.igc.[num].iflib.txq0.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.igc.[num].iflib.txq0.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.igc.[num].iflib.txq0.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.igc.[num].iflib.txq0.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.igc.[num].iflib.txq0.ring_state |
soft ring state |
dev.igc.[num].iflib.txq0.tx_map_failed |
# of times DMA map failed |
dev.igc.[num].iflib.txq0.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.igc.[num].iflib.txq0.txq_cidx |
Consumer Index |
dev.igc.[num].iflib.txq0.txq_cidx_processed |
Consumer Index seen by credit update |
dev.igc.[num].iflib.txq0.txq_cleaned |
total cleaned |
dev.igc.[num].iflib.txq0.txq_in_use |
descriptors in use |
dev.igc.[num].iflib.txq0.txq_pidx |
Producer Index |
dev.igc.[num].iflib.txq0.txq_processed |
descriptors processed for clean |
dev.igc.[num].iflib.txq1 |
Queue Name |
dev.igc.[num].iflib.txq1.cpu |
cpu this queue is bound to |
dev.igc.[num].iflib.txq1.m_pullups |
# of times m_pullup was called |
dev.igc.[num].iflib.txq1.mbuf_defrag |
# of times m_defrag was called |
dev.igc.[num].iflib.txq1.mbuf_defrag_failed |
# of times m_defrag failed |
dev.igc.[num].iflib.txq1.no_desc_avail |
# of times no descriptors were available |
dev.igc.[num].iflib.txq1.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.igc.[num].iflib.txq1.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.igc.[num].iflib.txq1.r_drops |
# of drops in the mp_ring for this queue |
dev.igc.[num].iflib.txq1.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.igc.[num].iflib.txq1.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.igc.[num].iflib.txq1.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.igc.[num].iflib.txq1.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.igc.[num].iflib.txq1.ring_state |
soft ring state |
dev.igc.[num].iflib.txq1.tx_map_failed |
# of times DMA map failed |
dev.igc.[num].iflib.txq1.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.igc.[num].iflib.txq1.txq_cidx |
Consumer Index |
dev.igc.[num].iflib.txq1.txq_cidx_processed |
Consumer Index seen by credit update |
dev.igc.[num].iflib.txq1.txq_cleaned |
total cleaned |
dev.igc.[num].iflib.txq1.txq_in_use |
descriptors in use |
dev.igc.[num].iflib.txq1.txq_pidx |
Producer Index |
dev.igc.[num].iflib.txq1.txq_processed |
descriptors processed for clean |
dev.igc.[num].iflib.txq2 |
Queue Name |
dev.igc.[num].iflib.txq2.cpu |
cpu this queue is bound to |
dev.igc.[num].iflib.txq2.m_pullups |
# of times m_pullup was called |
dev.igc.[num].iflib.txq2.mbuf_defrag |
# of times m_defrag was called |
dev.igc.[num].iflib.txq2.mbuf_defrag_failed |
# of times m_defrag failed |
dev.igc.[num].iflib.txq2.no_desc_avail |
# of times no descriptors were available |
dev.igc.[num].iflib.txq2.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.igc.[num].iflib.txq2.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.igc.[num].iflib.txq2.r_drops |
# of drops in the mp_ring for this queue |
dev.igc.[num].iflib.txq2.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.igc.[num].iflib.txq2.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.igc.[num].iflib.txq2.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.igc.[num].iflib.txq2.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.igc.[num].iflib.txq2.ring_state |
soft ring state |
dev.igc.[num].iflib.txq2.tx_map_failed |
# of times DMA map failed |
dev.igc.[num].iflib.txq2.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.igc.[num].iflib.txq2.txq_cidx |
Consumer Index |
dev.igc.[num].iflib.txq2.txq_cidx_processed |
Consumer Index seen by credit update |
dev.igc.[num].iflib.txq2.txq_cleaned |
total cleaned |
dev.igc.[num].iflib.txq2.txq_in_use |
descriptors in use |
dev.igc.[num].iflib.txq2.txq_pidx |
Producer Index |
dev.igc.[num].iflib.txq2.txq_processed |
descriptors processed for clean |
dev.igc.[num].iflib.txq3 |
Queue Name |
dev.igc.[num].iflib.txq3.cpu |
cpu this queue is bound to |
dev.igc.[num].iflib.txq3.m_pullups |
# of times m_pullup was called |
dev.igc.[num].iflib.txq3.mbuf_defrag |
# of times m_defrag was called |
dev.igc.[num].iflib.txq3.mbuf_defrag_failed |
# of times m_defrag failed |
dev.igc.[num].iflib.txq3.no_desc_avail |
# of times no descriptors were available |
dev.igc.[num].iflib.txq3.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.igc.[num].iflib.txq3.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.igc.[num].iflib.txq3.r_drops |
# of drops in the mp_ring for this queue |
dev.igc.[num].iflib.txq3.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.igc.[num].iflib.txq3.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.igc.[num].iflib.txq3.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.igc.[num].iflib.txq3.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.igc.[num].iflib.txq3.ring_state |
soft ring state |
dev.igc.[num].iflib.txq3.tx_map_failed |
# of times DMA map failed |
dev.igc.[num].iflib.txq3.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.igc.[num].iflib.txq3.txq_cidx |
Consumer Index |
dev.igc.[num].iflib.txq3.txq_cidx_processed |
Consumer Index seen by credit update |
dev.igc.[num].iflib.txq3.txq_cleaned |
total cleaned |
dev.igc.[num].iflib.txq3.txq_in_use |
descriptors in use |
dev.igc.[num].iflib.txq3.txq_pidx |
Producer Index |
dev.igc.[num].iflib.txq3.txq_processed |
descriptors processed for clean |
dev.igc.[num].iflib.use_extra_msix_vectors |
attempt to reserve the given number of extra MSI-X vectors during driver load for the creation of additional interfaces later |
dev.igc.[num].iflib.use_logical_cores |
try to make use of logical cores for TX and RX |
dev.igc.[num].interrupts |
Interrupt Statistics |
dev.igc.[num].interrupts.asserts |
Interrupt Assertion Count |
dev.igc.[num].interrupts.rx_desc_min_thresh |
Rx Desc Min Thresh Count |
dev.igc.[num].link_irq |
Link MSI-X IRQ Handled |
dev.igc.[num].mac_stats |
Statistics |
dev.igc.[num].mac_stats.alignment_errs |
Alignment Errors |
dev.igc.[num].mac_stats.bcast_pkts_recvd |
Broadcast Packets Received |
dev.igc.[num].mac_stats.bcast_pkts_txd |
Broadcast Packets Transmitted |
dev.igc.[num].mac_stats.collision_count |
Collision Count |
dev.igc.[num].mac_stats.crc_errs |
CRC errors |
dev.igc.[num].mac_stats.defer_count |
Defer Count |
dev.igc.[num].mac_stats.excess_coll |
Excessive collisions |
dev.igc.[num].mac_stats.good_octets_recvd |
Good Octets Received |
dev.igc.[num].mac_stats.good_octets_txd |
Good Octets Transmitted |
dev.igc.[num].mac_stats.good_pkts_recvd |
Good Packets Received |
dev.igc.[num].mac_stats.good_pkts_txd |
Good Packets Transmitted |
dev.igc.[num].mac_stats.late_coll |
Late collisions |
dev.igc.[num].mac_stats.mcast_pkts_recvd |
Multicast Packets Received |
dev.igc.[num].mac_stats.mcast_pkts_txd |
Multicast Packets Transmitted |
dev.igc.[num].mac_stats.mgmt_pkts_drop |
Management Packets Dropped |
dev.igc.[num].mac_stats.mgmt_pkts_recvd |
Management Packets Received |
dev.igc.[num].mac_stats.mgmt_pkts_txd |
Management Packets Transmitted |
dev.igc.[num].mac_stats.missed_packets |
Missed Packets |
dev.igc.[num].mac_stats.multiple_coll |
Multiple collisions |
dev.igc.[num].mac_stats.recv_errs |
Receive Errors |
dev.igc.[num].mac_stats.recv_fragmented |
Fragmented Packets Received |
dev.igc.[num].mac_stats.recv_jabber |
Received Jabber |
dev.igc.[num].mac_stats.recv_length_errors |
Receive Length Errors |
dev.igc.[num].mac_stats.recv_no_buff |
Receive No Buffers |
dev.igc.[num].mac_stats.recv_oversize |
Oversized Packets Received |
dev.igc.[num].mac_stats.recv_undersize |
Receive Undersize |
dev.igc.[num].mac_stats.rx_frames_1024_1522 |
1024-1522 byte frames received |
dev.igc.[num].mac_stats.rx_frames_128_255 |
128-255 byte frames received |
dev.igc.[num].mac_stats.rx_frames_256_511 |
256-511 byte frames received |
dev.igc.[num].mac_stats.rx_frames_512_1023 |
512-1023 byte frames received |
dev.igc.[num].mac_stats.rx_frames_64 |
64 byte frames received |
dev.igc.[num].mac_stats.rx_frames_65_127 |
65-127 byte frames received |
dev.igc.[num].mac_stats.sequence_errors |
Sequence Errors |
dev.igc.[num].mac_stats.single_coll |
Single collisions |
dev.igc.[num].mac_stats.symbol_errors |
Symbol Errors |
dev.igc.[num].mac_stats.total_pkts_recvd |
Total Packets Received |
dev.igc.[num].mac_stats.total_pkts_txd |
Total Packets Transmitted |
dev.igc.[num].mac_stats.tso_txd |
TSO Contexts Transmitted |
dev.igc.[num].mac_stats.tx_frames_1024_1522 |
1024-1522 byte frames transmitted |
dev.igc.[num].mac_stats.tx_frames_128_255 |
128-255 byte frames transmitted |
dev.igc.[num].mac_stats.tx_frames_256_511 |
256-511 byte frames transmitted |
dev.igc.[num].mac_stats.tx_frames_512_1023 |
512-1023 byte frames transmitted |
dev.igc.[num].mac_stats.tx_frames_64 |
64 byte frames transmitted |
dev.igc.[num].mac_stats.tx_frames_65_127 |
65-127 byte frames transmitted |
dev.igc.[num].mac_stats.unsupported_fc_recvd |
Unsupported Flow Control Received |
dev.igc.[num].mac_stats.xoff_recvd |
XOFF Received |
dev.igc.[num].mac_stats.xoff_txd |
XOFF Transmitted |
dev.igc.[num].mac_stats.xon_recvd |
XON Received |
dev.igc.[num].mac_stats.xon_txd |
XON Transmitted |
dev.igc.[num].nvm |
NVM Information |
dev.igc.[num].queue_rx_0 |
RX Queue Name |
dev.igc.[num].queue_rx_0.interrupt_rate |
Interrupt Rate |
dev.igc.[num].queue_rx_0.rx_irq |
Queue MSI-X Receive Interrupts |
dev.igc.[num].queue_rx_0.rxd_head |
Receive Descriptor Head |
dev.igc.[num].queue_rx_0.rxd_tail |
Receive Descriptor Tail |
dev.igc.[num].queue_rx_1 |
RX Queue Name |
dev.igc.[num].queue_rx_1.interrupt_rate |
Interrupt Rate |
dev.igc.[num].queue_rx_1.rx_irq |
Queue MSI-X Receive Interrupts |
dev.igc.[num].queue_rx_1.rxd_head |
Receive Descriptor Head |
dev.igc.[num].queue_rx_1.rxd_tail |
Receive Descriptor Tail |
dev.igc.[num].queue_rx_2 |
RX Queue Name |
dev.igc.[num].queue_rx_2.interrupt_rate |
Interrupt Rate |
dev.igc.[num].queue_rx_2.rx_irq |
Queue MSI-X Receive Interrupts |
dev.igc.[num].queue_rx_2.rxd_head |
Receive Descriptor Head |
dev.igc.[num].queue_rx_2.rxd_tail |
Receive Descriptor Tail |
dev.igc.[num].queue_rx_3 |
RX Queue Name |
dev.igc.[num].queue_rx_3.interrupt_rate |
Interrupt Rate |
dev.igc.[num].queue_rx_3.rx_irq |
Queue MSI-X Receive Interrupts |
dev.igc.[num].queue_rx_3.rxd_head |
Receive Descriptor Head |
dev.igc.[num].queue_rx_3.rxd_tail |
Receive Descriptor Tail |
dev.igc.[num].queue_tx_0 |
TX Queue Name |
dev.igc.[num].queue_tx_0.interrupt_rate |
Interrupt Rate |
dev.igc.[num].queue_tx_0.tx_irq |
Queue MSI-X Transmit Interrupts |
dev.igc.[num].queue_tx_0.txd_head |
Transmit Descriptor Head |
dev.igc.[num].queue_tx_0.txd_tail |
Transmit Descriptor Tail |
dev.igc.[num].queue_tx_1 |
TX Queue Name |
dev.igc.[num].queue_tx_1.interrupt_rate |
Interrupt Rate |
dev.igc.[num].queue_tx_1.tx_irq |
Queue MSI-X Transmit Interrupts |
dev.igc.[num].queue_tx_1.txd_head |
Transmit Descriptor Head |
dev.igc.[num].queue_tx_1.txd_tail |
Transmit Descriptor Tail |
dev.igc.[num].queue_tx_2 |
TX Queue Name |
dev.igc.[num].queue_tx_2.interrupt_rate |
Interrupt Rate |
dev.igc.[num].queue_tx_2.tx_irq |
Queue MSI-X Transmit Interrupts |
dev.igc.[num].queue_tx_2.txd_head |
Transmit Descriptor Head |
dev.igc.[num].queue_tx_2.txd_tail |
Transmit Descriptor Tail |
dev.igc.[num].queue_tx_3 |
TX Queue Name |
dev.igc.[num].queue_tx_3.interrupt_rate |
Interrupt Rate |
dev.igc.[num].queue_tx_3.tx_irq |
Queue MSI-X Transmit Interrupts |
dev.igc.[num].queue_tx_3.txd_head |
Transmit Descriptor Head |
dev.igc.[num].queue_tx_3.txd_tail |
Transmit Descriptor Tail |
dev.igc.[num].reg_dump |
Dump Registers |
dev.igc.[num].rs_dump |
Dump RS indexes |
dev.igc.[num].rx_control |
Receiver Control Register |
dev.igc.[num].rx_overruns |
RX overruns |
dev.igc.[num].watchdog_timeouts |
Watchdog timeouts |
dev.iic |
|
dev.iic.%parent |
parent class |
dev.iic.[num] |
|
dev.iic.[num].%desc |
device description |
dev.iic.[num].%driver |
device driver name |
dev.iic.[num].%iommu |
iommu unit handling the device requests |
dev.iic.[num].%location |
device location relative to parent |
dev.iic.[num].%parent |
parent device |
dev.iic.[num].%pnpinfo |
device identification |
dev.iicbus |
|
dev.iicbus.%parent |
parent class |
dev.iicbus.[num] |
|
dev.iicbus.[num].%desc |
device description |
dev.iicbus.[num].%driver |
device driver name |
dev.iicbus.[num].%iommu |
iommu unit handling the device requests |
dev.iicbus.[num].%location |
device location relative to parent |
dev.iicbus.[num].%parent |
parent device |
dev.iicbus.[num].%pnpinfo |
device identification |
dev.iicbus.[num].frequency |
Bus frequency in Hz |
dev.iichid |
|
dev.iichid.%parent |
parent class |
dev.iichid.[num] |
|
dev.iichid.[num].%desc |
device description |
dev.iichid.[num].%driver |
device driver name |
dev.iichid.[num].%iommu |
iommu unit handling the device requests |
dev.iichid.[num].%location |
device location relative to parent |
dev.iichid.[num].%parent |
parent device |
dev.iichid.[num].%pnpinfo |
device identification |
dev.iichid.[num].sampling_hysteresis |
number of missing samples before enabling slow mode |
dev.iichid.[num].sampling_rate_fast |
active sampling rate in num/second |
dev.iichid.[num].sampling_rate_slow |
idle sampling rate in num/second |
dev.intsmb |
|
dev.intsmb.%parent |
parent class |
dev.intsmb.[num] |
|
dev.intsmb.[num].%desc |
device description |
dev.intsmb.[num].%driver |
device driver name |
dev.intsmb.[num].%iommu |
iommu unit handling the device requests |
dev.intsmb.[num].%location |
device location relative to parent |
dev.intsmb.[num].%parent |
parent device |
dev.intsmb.[num].%pnpinfo |
device identification |
dev.ioapic |
|
dev.ioapic.%parent |
parent class |
dev.ioapic.[num] |
|
dev.ioapic.[num].%desc |
device description |
dev.ioapic.[num].%domain |
NUMA domain |
dev.ioapic.[num].%driver |
device driver name |
dev.ioapic.[num].%iommu |
iommu unit handling the device requests |
dev.ioapic.[num].%location |
device location relative to parent |
dev.ioapic.[num].%parent |
parent device |
dev.ioapic.[num].%pnpinfo |
device identification |
dev.ioat |
|
dev.ioat.%parent |
parent class |
dev.ioat.[num] |
|
dev.ioat.[num].%desc |
device description |
dev.ioat.[num].%domain |
NUMA domain |
dev.ioat.[num].%driver |
device driver name |
dev.ioat.[num].%iommu |
iommu unit handling the device requests |
dev.ioat.[num].%location |
device location relative to parent |
dev.ioat.[num].%parent |
parent device |
dev.ioat.[num].%pnpinfo |
device identification |
dev.ioat.[num].hammer |
Big hammers (mostly for testing) |
dev.ioat.[num].hammer.force_hw_reset |
Set to non-zero to reset the hardware |
dev.ioat.[num].intrdelay_max |
Maximum configurable INTRDELAY on this channel (microseconds) |
dev.ioat.[num].intrdelay_supported |
Is INTRDELAY supported |
dev.ioat.[num].max_xfer_size |
HW maximum transfer size |
dev.ioat.[num].state |
IOAT channel internal state |
dev.ioat.[num].state.chansts |
String of the channel status |
dev.ioat.[num].state.head |
SW descriptor head pointer index |
dev.ioat.[num].state.intrdelay |
Current INTRDELAY on this channel (cached, microseconds) |
dev.ioat.[num].state.is_submitter_processing |
submitter processing |
dev.ioat.[num].state.last_completion |
HW addr of last completion |
dev.ioat.[num].state.ring_size_order |
SW descriptor ring size order |
dev.ioat.[num].state.tail |
SW descriptor tail pointer index |
dev.ioat.[num].stats |
IOAT channel statistics |
dev.ioat.[num].stats.desc_per_interrupt |
Descriptors per interrupt |
dev.ioat.[num].stats.descriptors |
Number of descriptors processed on this channel |
dev.ioat.[num].stats.errored |
Number of descriptors failed by channel errors |
dev.ioat.[num].stats.halts |
Number of times the channel has halted |
dev.ioat.[num].stats.interrupts |
Number of interrupts processed on this channel |
dev.ioat.[num].stats.last_halt_chanerr |
The raw CHANERR when the channel was last halted |
dev.ioat.[num].stats.submitted |
Number of descriptors submitted to this channel |
dev.ioat.[num].version |
HW version (0xMM form) |
dev.isa |
|
dev.isa.%parent |
parent class |
dev.isa.[num] |
|
dev.isa.[num].%desc |
device description |
dev.isa.[num].%domain |
NUMA domain |
dev.isa.[num].%driver |
device driver name |
dev.isa.[num].%iommu |
iommu unit handling the device requests |
dev.isa.[num].%location |
device location relative to parent |
dev.isa.[num].%parent |
parent device |
dev.isa.[num].%pnpinfo |
device identification |
dev.isab |
|
dev.isab.%parent |
parent class |
dev.isab.[num] |
|
dev.isab.[num].%desc |
device description |
dev.isab.[num].%domain |
NUMA domain |
dev.isab.[num].%driver |
device driver name |
dev.isab.[num].%iommu |
iommu unit handling the device requests |
dev.isab.[num].%location |
device location relative to parent |
dev.isab.[num].%parent |
parent device |
dev.isab.[num].%pnpinfo |
device identification |
dev.iwlwifi |
|
dev.iwlwifi.%parent |
parent class |
dev.iwlwifi.[num] |
|
dev.iwlwifi.[num].%desc |
device description |
dev.iwlwifi.[num].%driver |
device driver name |
dev.iwlwifi.[num].%iommu |
iommu unit handling the device requests |
dev.iwlwifi.[num].%location |
device location relative to parent |
dev.iwlwifi.[num].%parent |
parent device |
dev.iwlwifi.[num].%pnpinfo |
device identification |
dev.ix |
|
dev.ix.%parent |
parent class |
dev.ix.[num] |
|
dev.ix.[num].%desc |
device description |
dev.ix.[num].%domain |
NUMA domain |
dev.ix.[num].%driver |
device driver name |
dev.ix.[num].%iommu |
iommu unit handling the device requests |
dev.ix.[num].%location |
device location relative to parent |
dev.ix.[num].%parent |
parent device |
dev.ix.[num].%pnpinfo |
device identification |
dev.ix.[num].advertise_speed |
Control advertised link speed using these flags:
0x1 - advertise 100M
0x2 - advertise 1G
0x4 - advertise 10G
0x8 - advertise 10M
0x10 - advertise 2.5G
0x20 - advertise 5G
100M and 10M are only supported on certain adapters. |
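Note: these flags are bit values and can typically be combined (for example, 0x2 | 0x4 = 0x6 advertises both 1G and 10G). As an informal sketch of reading this node programmatically, assuming a unit 0 instance named dev.ix.0.advertise_speed and the standard FreeBSD sysctlbyname(3) interface:

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            int flags;
            size_t len = sizeof(flags);

            /* dev.ix.0.advertise_speed is an assumed instance of dev.ix.[num].advertise_speed */
            if (sysctlbyname("dev.ix.0.advertise_speed", &flags, &len, NULL, 0) == -1) {
                    perror("sysctlbyname");
                    return (1);
            }
            printf("advertised speed flags: 0x%x\n", flags);
            return (0);
    }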
dev.ix.[num].dmac |
DMA Coalesce |
dev.ix.[num].dropped |
Driver dropped packets |
dev.ix.[num].enable_aim |
Interrupt Moderation |
dev.ix.[num].fc |
Set flow control mode using these values:
0 - off
1 - rx pause
2 - tx pause
3 - tx and rx pause |
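As a companion sketch for a writable node, assuming a unit 0 instance named dev.ix.0.fc, the same sysctlbyname(3) call can set the value (here 3, tx and rx pause, per the list above):

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            int fc = 3;     /* tx and rx pause, per the values listed above */

            /* dev.ix.0.fc is an assumed instance of dev.ix.[num].fc; adjust the unit as needed */
            if (sysctlbyname("dev.ix.0.fc", NULL, NULL, &fc, sizeof(fc)) == -1) {
                    perror("sysctlbyname");
                    return (1);
            }
            printf("flow control mode set to %d\n", fc);
            return (0);
    }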
dev.ix.[num].fw_version |
Prints FW/NVM Versions |
dev.ix.[num].iflib |
IFLIB fields |
dev.ix.[num].iflib.allocated_msix_vectors |
total # of MSI-X vectors allocated by driver |
dev.ix.[num].iflib.core_offset |
offset to start using cores at |
dev.ix.[num].iflib.disable_msix |
disable MSI-X (default 0) |
dev.ix.[num].iflib.driver_version |
driver version |
dev.ix.[num].iflib.override_nrxds |
list of # of RX descriptors to use, 0 = use default # |
dev.ix.[num].iflib.override_nrxqs |
# of rxqs to use, 0 => use default # |
dev.ix.[num].iflib.override_ntxds |
list of # of TX descriptors to use, 0 = use default # |
dev.ix.[num].iflib.override_ntxqs |
# of txqs to use, 0 => use default # |
dev.ix.[num].iflib.override_qs_enable |
permit #txq != #rxq |
dev.ix.[num].iflib.rx_budget |
set the RX budget |
dev.ix.[num].iflib.rxq0 |
Queue Name |
dev.ix.[num].iflib.rxq0.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq0.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq0.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq0.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq0.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq0.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq00 |
Queue Name |
dev.ix.[num].iflib.rxq00.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq00.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq00.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq00.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq00.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq00.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq01 |
Queue Name |
dev.ix.[num].iflib.rxq01.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq01.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq01.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq01.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq01.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq01.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq02 |
Queue Name |
dev.ix.[num].iflib.rxq02.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq02.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq02.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq02.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq02.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq02.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq03 |
Queue Name |
dev.ix.[num].iflib.rxq03.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq03.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq03.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq03.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq03.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq03.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq04 |
Queue Name |
dev.ix.[num].iflib.rxq04.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq04.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq04.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq04.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq04.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq04.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq05 |
Queue Name |
dev.ix.[num].iflib.rxq05.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq05.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq05.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq05.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq05.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq05.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq06 |
Queue Name |
dev.ix.[num].iflib.rxq06.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq06.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq06.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq06.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq06.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq06.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq07 |
Queue Name |
dev.ix.[num].iflib.rxq07.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq07.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq07.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq07.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq07.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq07.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq08 |
Queue Name |
dev.ix.[num].iflib.rxq08.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq08.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq08.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq08.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq08.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq08.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq09 |
Queue Name |
dev.ix.[num].iflib.rxq09.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq09.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq09.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq09.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq09.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq09.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq1 |
Queue Name |
dev.ix.[num].iflib.rxq1.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq1.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq1.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq1.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq1.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq1.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq10 |
Queue Name |
dev.ix.[num].iflib.rxq10.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq10.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq10.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq10.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq10.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq10.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq11 |
Queue Name |
dev.ix.[num].iflib.rxq11.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq11.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq11.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq11.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq11.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq11.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq12 |
Queue Name |
dev.ix.[num].iflib.rxq12.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq12.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq12.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq12.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq12.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq12.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq13 |
Queue Name |
dev.ix.[num].iflib.rxq13.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq13.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq13.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq13.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq13.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq13.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq14 |
Queue Name |
dev.ix.[num].iflib.rxq14.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq14.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq14.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq14.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq14.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq14.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq15 |
Queue Name |
dev.ix.[num].iflib.rxq15.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq15.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq15.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq15.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq15.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq15.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq2 |
Queue Name |
dev.ix.[num].iflib.rxq2.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq2.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq2.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq2.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq2.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq2.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.rxq3 |
Queue Name |
dev.ix.[num].iflib.rxq3.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.rxq3.rxq_fl0 |
freelist Name |
dev.ix.[num].iflib.rxq3.rxq_fl0.buf_size |
buffer size |
dev.ix.[num].iflib.rxq3.rxq_fl0.cidx |
Consumer Index |
dev.ix.[num].iflib.rxq3.rxq_fl0.credits |
credits available |
dev.ix.[num].iflib.rxq3.rxq_fl0.pidx |
Producer Index |
dev.ix.[num].iflib.separate_txrx |
use separate cores for TX and RX |
dev.ix.[num].iflib.tx_abdicate |
cause TX to abdicate instead of running to completion |
dev.ix.[num].iflib.txq0 |
Queue Name |
dev.ix.[num].iflib.txq0.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq0.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq0.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq0.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq0.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq0.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq0.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq0.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq0.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq0.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq0.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq0.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq0.ring_state |
soft ring state |
dev.ix.[num].iflib.txq0.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq0.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq0.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq0.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq0.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq0.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq0.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq0.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq00 |
Queue Name |
dev.ix.[num].iflib.txq00.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq00.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq00.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq00.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq00.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq00.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq00.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq00.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq00.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq00.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq00.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq00.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq00.ring_state |
soft ring state |
dev.ix.[num].iflib.txq00.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq00.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq00.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq00.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq00.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq00.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq00.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq00.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq01 |
Queue Name |
dev.ix.[num].iflib.txq01.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq01.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq01.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq01.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq01.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq01.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq01.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq01.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq01.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq01.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq01.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq01.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq01.ring_state |
soft ring state |
dev.ix.[num].iflib.txq01.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq01.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq01.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq01.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq01.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq01.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq01.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq01.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq02 |
Queue Name |
dev.ix.[num].iflib.txq02.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq02.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq02.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq02.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq02.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq02.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq02.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq02.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq02.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq02.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq02.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq02.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq02.ring_state |
soft ring state |
dev.ix.[num].iflib.txq02.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq02.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq02.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq02.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq02.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq02.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq02.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq02.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq03 |
Queue Name |
dev.ix.[num].iflib.txq03.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq03.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq03.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq03.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq03.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq03.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq03.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq03.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq03.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq03.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq03.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq03.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq03.ring_state |
soft ring state |
dev.ix.[num].iflib.txq03.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq03.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq03.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq03.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq03.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq03.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq03.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq03.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq04 |
Queue Name |
dev.ix.[num].iflib.txq04.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq04.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq04.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq04.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq04.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq04.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq04.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq04.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq04.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq04.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq04.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq04.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq04.ring_state |
soft ring state |
dev.ix.[num].iflib.txq04.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq04.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq04.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq04.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq04.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq04.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq04.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq04.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq05 |
Queue Name |
dev.ix.[num].iflib.txq05.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq05.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq05.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq05.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq05.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq05.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq05.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq05.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq05.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq05.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq05.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq05.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq05.ring_state |
soft ring state |
dev.ix.[num].iflib.txq05.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq05.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq05.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq05.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq05.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq05.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq05.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq05.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq06 |
Queue Name |
dev.ix.[num].iflib.txq06.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq06.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq06.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq06.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq06.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq06.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq06.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq06.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq06.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq06.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq06.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq06.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq06.ring_state |
soft ring state |
dev.ix.[num].iflib.txq06.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq06.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq06.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq06.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq06.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq06.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq06.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq06.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq07 |
Queue Name |
dev.ix.[num].iflib.txq07.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq07.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq07.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq07.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq07.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq07.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq07.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq07.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq07.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq07.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq07.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq07.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq07.ring_state |
soft ring state |
dev.ix.[num].iflib.txq07.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq07.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq07.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq07.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq07.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq07.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq07.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq07.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq08 |
Queue Name |
dev.ix.[num].iflib.txq08.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq08.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq08.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq08.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq08.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq08.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq08.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq08.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq08.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq08.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq08.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq08.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq08.ring_state |
soft ring state |
dev.ix.[num].iflib.txq08.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq08.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq08.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq08.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq08.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq08.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq08.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq08.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq09 |
Queue Name |
dev.ix.[num].iflib.txq09.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq09.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq09.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq09.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq09.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq09.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq09.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq09.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq09.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq09.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq09.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq09.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq09.ring_state |
soft ring state |
dev.ix.[num].iflib.txq09.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq09.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq09.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq09.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq09.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq09.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq09.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq09.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq1 |
Queue Name |
dev.ix.[num].iflib.txq1.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq1.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq1.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq1.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq1.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq1.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq1.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq1.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq1.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq1.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq1.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq1.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq1.ring_state |
soft ring state |
dev.ix.[num].iflib.txq1.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq1.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq1.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq1.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq1.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq1.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq1.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq1.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq10 |
Queue Name |
dev.ix.[num].iflib.txq10.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq10.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq10.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq10.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq10.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq10.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq10.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq10.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq10.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq10.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq10.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq10.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq10.ring_state |
soft ring state |
dev.ix.[num].iflib.txq10.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq10.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq10.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq10.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq10.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq10.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq10.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq10.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq11 |
Queue Name |
dev.ix.[num].iflib.txq11.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq11.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq11.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq11.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq11.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq11.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq11.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq11.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq11.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq11.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq11.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq11.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq11.ring_state |
soft ring state |
dev.ix.[num].iflib.txq11.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq11.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq11.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq11.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq11.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq11.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq11.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq11.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq12 |
Queue Name |
dev.ix.[num].iflib.txq12.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq12.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq12.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq12.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq12.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq12.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq12.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq12.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq12.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq12.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq12.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq12.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq12.ring_state |
soft ring state |
dev.ix.[num].iflib.txq12.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq12.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq12.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq12.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq12.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq12.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq12.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq12.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq13 |
Queue Name |
dev.ix.[num].iflib.txq13.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq13.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq13.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq13.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq13.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq13.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq13.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq13.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq13.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq13.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq13.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq13.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq13.ring_state |
soft ring state |
dev.ix.[num].iflib.txq13.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq13.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq13.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq13.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq13.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq13.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq13.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq13.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq14 |
Queue Name |
dev.ix.[num].iflib.txq14.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq14.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq14.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq14.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq14.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq14.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq14.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq14.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq14.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq14.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq14.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq14.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq14.ring_state |
soft ring state |
dev.ix.[num].iflib.txq14.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq14.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq14.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq14.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq14.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq14.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq14.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq14.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq15 |
Queue Name |
dev.ix.[num].iflib.txq15.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq15.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq15.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq15.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq15.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq15.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq15.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq15.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq15.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq15.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq15.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq15.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq15.ring_state |
soft ring state |
dev.ix.[num].iflib.txq15.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq15.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq15.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq15.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq15.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq15.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq15.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq15.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq2 |
Queue Name |
dev.ix.[num].iflib.txq2.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq2.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq2.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq2.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq2.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq2.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq2.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq2.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq2.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq2.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq2.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq2.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq2.ring_state |
soft ring state |
dev.ix.[num].iflib.txq2.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq2.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq2.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq2.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq2.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq2.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq2.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq2.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.txq3 |
Queue Name |
dev.ix.[num].iflib.txq3.cpu |
cpu this queue is bound to |
dev.ix.[num].iflib.txq3.m_pullups |
# of times m_pullup was called |
dev.ix.[num].iflib.txq3.mbuf_defrag |
# of times m_defrag was called |
dev.ix.[num].iflib.txq3.mbuf_defrag_failed |
# of times m_defrag failed |
dev.ix.[num].iflib.txq3.no_desc_avail |
# of times no descriptors were available |
dev.ix.[num].iflib.txq3.no_tx_dma_setup |
# of times map failed for other than EFBIG |
dev.ix.[num].iflib.txq3.r_abdications |
# of consumer abdications in the mp_ring for this queue |
dev.ix.[num].iflib.txq3.r_drops |
# of drops in the mp_ring for this queue |
dev.ix.[num].iflib.txq3.r_enqueues |
# of enqueues to the mp_ring for this queue |
dev.ix.[num].iflib.txq3.r_restarts |
# of consumer restarts in the mp_ring for this queue |
dev.ix.[num].iflib.txq3.r_stalls |
# of consumer stalls in the mp_ring for this queue |
dev.ix.[num].iflib.txq3.r_starts |
# of normal consumer starts in mp_ring for this queue |
dev.ix.[num].iflib.txq3.ring_state |
soft ring state |
dev.ix.[num].iflib.txq3.tx_map_failed |
# of times DMA map failed |
dev.ix.[num].iflib.txq3.txd_encap_efbig |
# of times txd_encap returned EFBIG |
dev.ix.[num].iflib.txq3.txq_cidx |
Consumer Index |
dev.ix.[num].iflib.txq3.txq_cidx_processed |
Consumer Index seen by credit update |
dev.ix.[num].iflib.txq3.txq_cleaned |
total cleaned |
dev.ix.[num].iflib.txq3.txq_in_use |
descriptors in use |
dev.ix.[num].iflib.txq3.txq_pidx |
Producer Index |
dev.ix.[num].iflib.txq3.txq_processed |
descriptors processed for clean |
dev.ix.[num].iflib.use_extra_msix_vectors |
attempt to reserve the given number of extra MSI-X vectors during driver load for the creation of additional interfaces later |
dev.ix.[num].iflib.use_logical_cores |
try to make use of logical cores for TX and RX |
dev.ix.[num].link_irq |
Link MSI-X IRQ Handled |
dev.ix.[num].mac_stats |
MAC Statistics |
dev.ix.[num].mac_stats.bcast_pkts_rcvd |
Broadcast Packets Received |
dev.ix.[num].mac_stats.bcast_pkts_txd |
Broadcast Packets Transmitted |
dev.ix.[num].mac_stats.byte_errs |
Byte Errors |
dev.ix.[num].mac_stats.checksum_errs |
Checksum Errors |
dev.ix.[num].mac_stats.crc_errs |
CRC Errors |
dev.ix.[num].mac_stats.good_octets_rcvd |
Good Octets Received |
dev.ix.[num].mac_stats.good_octets_txd |
Good Octets Transmitted |
dev.ix.[num].mac_stats.good_pkts_rcvd |
Good Packets Received |
dev.ix.[num].mac_stats.good_pkts_txd |
Good Packets Transmitted |
dev.ix.[num].mac_stats.ill_errs |
Illegal Byte Errors |
dev.ix.[num].mac_stats.local_faults |
MAC Local Faults |
dev.ix.[num].mac_stats.management_pkts_drpd |
Management Packets Dropped |
dev.ix.[num].mac_stats.management_pkts_rcvd |
Management Packets Received |
dev.ix.[num].mac_stats.management_pkts_txd |
Management Packets Transmitted |
dev.ix.[num].mac_stats.mcast_pkts_rcvd |
Multicast Packets Received |
dev.ix.[num].mac_stats.mcast_pkts_txd |
Multicast Packets Transmitted |
dev.ix.[num].mac_stats.rec_len_errs |
Receive Length Errors |
dev.ix.[num].mac_stats.recv_fragmented |
Fragmented Packets Received |
dev.ix.[num].mac_stats.recv_jabberd |
Received Jabber |
dev.ix.[num].mac_stats.recv_oversized |
Oversized Packets Received |
dev.ix.[num].mac_stats.recv_undersized |
Receive Undersized |
dev.ix.[num].mac_stats.remote_faults |
MAC Remote Faults |
dev.ix.[num].mac_stats.rx_errs |
Sum of the following RX error counters:
* CRC errors,
* illegal byte error count,
* missed packet count,
* length error count,
* undersized packets count,
* fragmented packets count,
* oversized packets count,
* jabber count. |
dev.ix.[num].mac_stats.rx_frames_1024_1522 |
1024-1522 byte frames received |
dev.ix.[num].mac_stats.rx_frames_128_255 |
128-255 byte frames received |
dev.ix.[num].mac_stats.rx_frames_256_511 |
256-511 byte frames received |
dev.ix.[num].mac_stats.rx_frames_512_1023 |
512-1023 byte frames received |
dev.ix.[num].mac_stats.rx_frames_64 |
64 byte frames received |
dev.ix.[num].mac_stats.rx_frames_65_127 |
65-127 byte frames received |
dev.ix.[num].mac_stats.rx_missed_packets |
RX Missed Packet Count |
dev.ix.[num].mac_stats.short_discards |
MAC Short Packets Discarded |
dev.ix.[num].mac_stats.total_octets_rcvd |
Total Octets Received |
dev.ix.[num].mac_stats.total_pkts_rcvd |
Total Packets Received |
dev.ix.[num].mac_stats.total_pkts_txd |
Total Packets Transmitted |
dev.ix.[num].mac_stats.tx_frames_1024_1522 |
1024-1522 byte frames transmitted |
dev.ix.[num].mac_stats.tx_frames_128_255 |
128-255 byte frames transmitted |
dev.ix.[num].mac_stats.tx_frames_256_511 |
256-511 byte frames transmitted |
dev.ix.[num].mac_stats.tx_frames_512_1023 |
512-1023 byte frames transmitted |
dev.ix.[num].mac_stats.tx_frames_64 |
64 byte frames transmitted |
dev.ix.[num].mac_stats.tx_frames_65_127 |
65-127 byte frames transmitted |
dev.ix.[num].mac_stats.xoff_recvd |
Link XOFF Received |
dev.ix.[num].mac_stats.xoff_txd |
Link XOFF Transmitted |
dev.ix.[num].mac_stats.xon_recvd |
Link XON Received |
dev.ix.[num].mac_stats.xon_txd |
Link XON Transmitted |
dev.ix.[num].queue0 |
Queue Name |
dev.ix.[num].queue0.interrupt_rate |
Interrupt Rate |
dev.ix.[num].queue0.irqs |
irqs on this queue |
dev.ix.[num].queue0.rx_bytes |
Queue Bytes Received |
dev.ix.[num].queue0.rx_copies |
Copied RX Frames |
dev.ix.[num].queue0.rx_discarded |
Discarded RX packets |
dev.ix.[num].queue0.rx_packets |
Queue Packets Received |
dev.ix.[num].queue0.rxd_head |
Receive Descriptor Head |
dev.ix.[num].queue0.rxd_tail |
Receive Descriptor Tail |
dev.ix.[num].queue0.tso_tx |
TSO |
dev.ix.[num].queue0.tx_packets |
Queue Packets Transmitted |
dev.ix.[num].queue0.txd_head |
Transmit Descriptor Head |
dev.ix.[num].queue0.txd_tail |
Transmit Descriptor Tail |
dev.ix.[num].queue1 |
Queue Name |
dev.ix.[num].queue1.interrupt_rate |
Interrupt Rate |
dev.ix.[num].queue1.irqs |
irqs on this queue |
dev.ix.[num].queue1.rx_bytes |
Queue Bytes Received |
dev.ix.[num].queue1.rx_copies |
Copied RX Frames |
dev.ix.[num].queue1.rx_discarded |
Discarded RX packets |
dev.ix.[num].queue1.rx_packets |
Queue Packets Received |
dev.ix.[num].queue1.rxd_head |
Receive Descriptor Head |
dev.ix.[num].queue1.rxd_tail |
Receive Descriptor Tail |
dev.ix.[num].queue1.tso_tx |
TSO |
dev.ix.[num].queue1.tx_packets |
Queue Packets Transmitted |
dev.ix.[num].queue1.txd_head |
Transmit Descriptor Head |
dev.ix.[num].queue1.txd_tail |
Transmit Descriptor Tail |
dev.ix.[num].queue10 |
Queue Name |
dev.ix.[num].queue10.interrupt_rate |
Interrupt Rate |
dev.ix.[num].queue10.irqs |
irqs on this queue |
dev.ix.[num].queue10.rx_bytes |
Queue Bytes Received |
dev.ix.[num].queue10.rx_copies |
Copied RX Frames |
dev.ix.[num].queue10.rx_discarded |
Discarded RX packets |
dev.ix.[num].queue10.rx_packets |
Queue Packets Received |
dev.ix.[num].queue10.rxd_head |
Receive Descriptor Head |
dev.ix.[num].queue10.rxd_tail |
Receive Descriptor Tail |
dev.ix.[num].queue10.tso_tx |
TSO |
dev.ix.[num].queue10.tx_packets |
Queue Packets Transmitted |
dev.ix.[num].queue10.txd_head |
Transmit Descriptor Head |
dev.ix.[num].queue10.txd_tail |
Transmit Descriptor Tail |
dev.ix.[num].queue11 |
Queue Name |
dev.ix.[num].queue11.interrupt_rate |
Interrupt Rate |
dev.ix.[num].queue11.irqs |
irqs on this queue |
dev.ix.[num].queue11.rx_bytes |
Queue Bytes Received |
dev.ix.[num].queue11.rx_copies |
Copied RX Frames |
dev.ix.[num].queue11.rx_discarded |
Discarded RX packets |
dev.ix.[num].queue11.rx_packets |
Queue Packets Received |
dev.ix.[num].queue11.rxd_head |
Receive Descriptor Head |
dev.ix.[num].queue11.rxd_tail |
Receive Descriptor Tail |
dev.ix.[num].queue11.tso_tx |
TSO |
dev.ix.[num].queue11.tx_packets |
Queue Packets Transmitted |
dev.ix.[num].queue11.txd_head |
Transmit Descriptor Head |
dev.ix.[num].queue11.txd_tail |
Transmit Descriptor Tail |
dev.ix.[num].queue12 |
Queue Name |
dev.ix.[num].queue12.interrupt_rate |
Interrupt Rate |
dev.ix.[num].queue12.irqs |
irqs on this queue |
dev.ix.[num].queue12.rx_bytes |
Queue Bytes Received |
dev.ix.[num].queue12.rx_copies |
Copied RX Frames |
dev.ix.[num].queue12.rx_discarded |
Discarded RX packets |
dev.ix.[num].queue12.rx_packets |
Queue Packets Received |
dev.ix.[num].queue12.rxd_head |
Receive Descriptor Head |
dev.ix.[num].queue12.rxd_tail |
Receive Descriptor Tail |
dev.ix.[num].queue12.tso_tx |
TSO |
dev.ix.[num].queue12.tx_packets |
Queue Packets Transmitted |
dev.ix.[num].queue12.txd_head |
Transmit Descriptor Head |
dev.ix.[num].queue12.txd_tail |
Transmit Descriptor Tail |
dev.ix.[num].queue13 |
Queue Name |
dev.ix.[num].queue13.interrupt_rate |
Interrupt Rate |
dev.ix.[num].queue13.irqs |
irqs on this queue |
dev.ix.[num].queue13.rx_bytes |
Queue Bytes Received |
dev.ix.[num].queue13.rx_copies |
Copied RX Frames |
dev.ix.[num].queue13.rx_discarded |
Discarded RX packets |
dev.ix.[num].queue13.rx_packets |
Queue Packets Received |
dev.ix.[num].queue13.rxd_head |
Receive Descriptor Head |
dev.ix.[num].queue13.rxd_tail |
Receive Descriptor Tail |
dev.ix.[num].queue13.tso_tx |
TSO |
dev.ix.[num].queue13.tx_packets |
Queue Packets Transmitted |
dev.ix.[num].queue13.txd_head |
Transmit Descriptor Head |
dev.ix.[num].queue13.txd_tail |
Transmit Descriptor Tail |
dev.ix.[num].queue14 |
Queue Name |
dev.ix.[num].queue14.interrupt_rate |
Interrupt Rate |
dev.ix.[num].queue14.irqs |
irqs on this queue |
dev.ix.[num].queue14.rx_bytes |
Queue Bytes Received |
dev.ix.[num].queue14.rx_copies |
Copied RX Frames |
dev.ix.[num].queue14.rx_discarded |
Discarded RX packets |
dev.ix.[num].queue14.rx_packets |
Queue Packets Received |
dev.ix.[num].queue14.rxd_head |
Receive Descriptor Head |
dev.ix.[num].queue14.rxd_tail |
Receive Descriptor Tail |
dev.ix.[num].queue14.tso_tx |
TSO |
dev.ix.[num].queue14.tx_packets |
Queue Packets Transmitted |
dev.ix.[num].queue14.txd_head |
Transmit Descriptor Head |
dev.ix.[num].queue14.txd_tail |
Transmit Descriptor Tail |
dev.ix.[num].queue15 |
Queue Name |
dev.ix.[num].queue15.interrupt_rate |
Interrupt Rate |
dev.ix.[num].queue15.irqs |
irqs on this queue |
dev.ix.[num].queue15.rx_bytes |
Queue Bytes Received |
dev.ix.[num].queue15.rx_copies |
Copied RX Frames |
dev.ix.[num].queue15.rx_discarded |
Discarded RX packets |
dev.ix.[num].queue15.rx_packets |
Queue Packets Received |
dev.ix.[num].queue15.rxd_head |
Receive Descriptor Head |
dev.ix.[num].queue15.rxd_tail |
Receive Descriptor Tail |
dev.ix.[num].queue15.tso_tx |
TSO |
dev.ix.[num].queue15.tx_packets |
Queue Packets Transmitted |
dev.ix.[num].queue15.txd_head |
Transmit Descriptor Head |
dev.ix.[num].queue15.txd_tail |
Transmit Descriptor Tail |
dev.ix.[num].queue2 |
Queue Name |
dev.ix.[num].queue2.interrupt_rate |
Interrupt Rate |
dev.ix.[num].queue2.irqs |
irqs on this queue |
dev.ix.[num].queue2.rx_bytes |
Queue Bytes Received |
dev.ix.[num].queue2.rx_copies |
Copied RX Frames |
dev.ix.[num].queue2.rx_discarded |
Discarded RX packets |
dev.ix.[num].queue2.rx_packets |
Queue Packets Received |
dev.ix.[num].queue2.rxd_head |
Receive Descriptor Head |
dev.ix.[num].queue2.rxd_tail |
Receive Descriptor Tail |
dev.ix.[num].queue2.tso_tx |
TSO |
dev.ix.[num].queue2.tx_packets |
Queue Packets Transmitted |
dev.ix.[num].queue2.txd_head |
Transmit Descriptor Head |
dev.ix.[num].queue2.txd_tail |
Transmit Descriptor Tail |
dev.ix.[num].queue3 |
Queue Name |
dev.ix.[num].queue3.interrupt_rate |
Interrupt Rate |
dev.ix.[num].queue3.irqs |
irqs on this queue |
dev.ix.[num].queue3.rx_bytes |
Queue Bytes Received |
dev.ix.[num].queue3.rx_copies |
Copied RX Frames |
dev.ix.[num].queue3.rx_discarded |
Discarded RX packets |
dev.ix.[num].queue3.rx_packets |
Queue Packets Received |
dev.ix.[num].queue3.rxd_head |
Receive Descriptor Head |
dev.ix.[num].queue3.rxd_tail |
Receive Descriptor Tail |
dev.ix.[num].queue3.tso_tx |
TSO |
dev.ix.[num].queue3.tx_packets |
Queue Packets Transmitted |
dev.ix.[num].queue3.txd_head |
Transmit Descriptor Head |
dev.ix.[num].queue3.txd_tail |
Transmit Descriptor Tail |
dev.ix.[num].queue4 |
Queue Name |
dev.ix.[num].queue4.interrupt_rate |
Interrupt Rate |
dev.ix.[num].queue4.irqs |
irqs on this queue |
dev.ix.[num].queue4.rx_bytes |
Queue Bytes Received |
dev.ix.[num].queue4.rx_copies |
Copied RX Frames |
dev.ix.[num].queue4.rx_discarded |
Discarded RX packets |
dev.ix.[num].queue4.rx_packets |
Queue Packets Received |
dev.ix.[num].queue4.rxd_head |
Receive Descriptor Head |
dev.ix.[num].queue4.rxd_tail |
Receive Descriptor Tail |
dev.ix.[num].queue4.tso_tx |
TSO |
dev.ix.[num].queue4.tx_packets |
Queue Packets Transmitted |
dev.ix.[num].queue4.txd_head |
Transmit Descriptor Head |
dev.ix.[num].queue4.txd_tail |
Transmit Descriptor Tail |
dev.ix.[num].queue5 |
Queue Name |
dev.ix.[num].queue5.interrupt_rate |
Interrupt Rate |
dev.ix.[num].queue5.irqs |
irqs on this queue |
dev.ix.[num].queue5.rx_bytes |
Queue Bytes Received |
dev.ix.[num].queue5.rx_copies |
Copied RX Frames |
dev.ix.[num].queue5.rx_discarded |
Discarded RX packets |
dev.ix.[num].queue5.rx_packets |
Queue Packets Received |
dev.ix.[num].queue5.rxd_head |
Receive Descriptor Head |
dev.ix.[num].queue5.rxd_tail |
Receive Descriptor Tail |
dev.ix.[num].queue5.tso_tx |
TSO |
dev.ix.[num].queue5.tx_packets |
Queue Packets Transmitted |
dev.ix.[num].queue5.txd_head |
Transmit Descriptor Head |
dev.ix.[num].queue5.txd_tail |
Transmit Descriptor Tail |
dev.ix.[num].queue6 |
Queue Name |
dev.ix.[num].queue6.interrupt_rate |
Interrupt Rate |
dev.ix.[num].queue6.irqs |
irqs on this queue |
dev.ix.[num].queue6.rx_bytes |
Queue Bytes Received |
dev.ix.[num].queue6.rx_copies |
Copied RX Frames |
dev.ix.[num].queue6.rx_discarded |
Discarded RX packets |
dev.ix.[num].queue6.rx_packets |
Queue Packets Received |
dev.ix.[num].queue6.rxd_head |
Receive Descriptor Head |
dev.ix.[num].queue6.rxd_tail |
Receive Descriptor Tail |
dev.ix.[num].queue6.tso_tx |
TSO |
dev.ix.[num].queue6.tx_packets |
Queue Packets Transmitted |
dev.ix.[num].queue6.txd_head |
Transmit Descriptor Head |
dev.ix.[num].queue6.txd_tail |
Transmit Descriptor Tail |
dev.ix.[num].queue7 |
Queue Name |
dev.ix.[num].queue7.interrupt_rate |
Interrupt Rate |
dev.ix.[num].queue7.irqs |
irqs on this queue |
dev.ix.[num].queue7.rx_bytes |
Queue Bytes Received |
dev.ix.[num].queue7.rx_copies |
Copied RX Frames |
dev.ix.[num].queue7.rx_discarded |
Discarded RX packets |
dev.ix.[num].queue7.rx_packets |
Queue Packets Received |
dev.ix.[num].queue7.rxd_head |
Receive Descriptor Head |
dev.ix.[num].queue7.rxd_tail |
Receive Descriptor Tail |
dev.ix.[num].queue7.tso_tx |
TSO |
dev.ix.[num].queue7.tx_packets |
Queue Packets Transmitted |
dev.ix.[num].queue7.txd_head |
Transmit Descriptor Head |
dev.ix.[num].queue7.txd_tail |
Transmit Descriptor Tail |
dev.ix.[num].queue8 |
Queue Name |
dev.ix.[num].queue8.interrupt_rate |
Interrupt Rate |
dev.ix.[num].queue8.irqs |
irqs on this queue |
dev.ix.[num].queue8.rx_bytes |
Queue Bytes Received |
dev.ix.[num].queue8.rx_copies |
Copied RX Frames |
dev.ix.[num].queue8.rx_discarded |
Discarded RX packets |
dev.ix.[num].queue8.rx_packets |
Queue Packets Received |
dev.ix.[num].queue8.rxd_head |
Receive Descriptor Head |
dev.ix.[num].queue8.rxd_tail |
Receive Descriptor Tail |
dev.ix.[num].queue8.tso_tx |
TSO |
dev.ix.[num].queue8.tx_packets |
Queue Packets Transmitted |
dev.ix.[num].queue8.txd_head |
Transmit Descriptor Head |
dev.ix.[num].queue8.txd_tail |
Transmit Descriptor Tail |
dev.ix.[num].queue9 |
Queue Name |
dev.ix.[num].queue9.interrupt_rate |
Interrupt Rate |
dev.ix.[num].queue9.irqs |
irqs on this queue |
dev.ix.[num].queue9.rx_bytes |
Queue Bytes Received |
dev.ix.[num].queue9.rx_copies |
Copied RX Frames |
dev.ix.[num].queue9.rx_discarded |
Discarded RX packets |
dev.ix.[num].queue9.rx_packets |
Queue Packets Received |
dev.ix.[num].queue9.rxd_head |
Receive Descriptor Head |
dev.ix.[num].queue9.rxd_tail |
Receive Descriptor Tail |
dev.ix.[num].queue9.tso_tx |
TSO |
dev.ix.[num].queue9.tx_packets |
Queue Packets Transmitted |
dev.ix.[num].queue9.txd_head |
Transmit Descriptor Head |
dev.ix.[num].queue9.txd_tail |
Transmit Descriptor Tail |
dev.ix.[num].wake |
Device set to wake the system |
dev.ix.[num].watchdog_events |
Watchdog timeouts |
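
Counters in this list can be read programmatically through sysctlbyname(3). A minimal sketch follows, assuming an ix interface at unit 0 and that queue0.rx_packets is exported as a 64-bit unsigned counter (the unit number and value width are assumptions for illustration, not taken from this list):

/* Read one ix(4) per-queue counter via sysctlbyname(3). */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	uint64_t rx_packets;
	size_t len = sizeof(rx_packets);

	/* OID name mirrors the MIB entries above; "0" is the assumed unit. */
	if (sysctlbyname("dev.ix.0.queue0.rx_packets",
	    &rx_packets, &len, NULL, 0) == -1) {
		fprintf(stderr, "sysctlbyname: %s\n", strerror(errno));
		return (1);
	}
	printf("dev.ix.0.queue0.rx_packets = %ju\n", (uintmax_t)rx_packets);
	return (0);
}
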
dev.kvmclock |
|
dev.kvmclock.%parent |
parent class |
dev.kvmclock.[num] |
|
dev.kvmclock.[num].%desc |
device description |
dev.kvmclock.[num].%driver |
device driver name |
dev.kvmclock.[num].%iommu |
iommu unit handling the device requests |
dev.kvmclock.[num].%location |
device location relative to parent |
dev.kvmclock.[num].%parent |
parent device |
dev.kvmclock.[num].%pnpinfo |
device identification |
dev.kvmclock.[num].tsc_freq |
Time Stamp Counter frequency |
dev.kvmclock.[num].vdso_enable_without_rdtscp |
Allow the use of a vDSO when rdtscp is not available |
dev.kvmclock.[num].vdso_force_unstable |
Forcibly deassert stable flag in vDSO codepath |
dev.lkpi_iic |
|
dev.lkpi_iic.%parent |
parent class |
dev.lkpi_iic.[num] |
|
dev.lkpi_iic.[num].%desc |
device description |
dev.lkpi_iic.[num].%driver |
device driver name |
dev.lkpi_iic.[num].%iommu |
iommu unit handling the device requests |
dev.lkpi_iic.[num].%location |
device location relative to parent |
dev.lkpi_iic.[num].%parent |
parent device |
dev.lkpi_iic.[num].%pnpinfo |
device identification |
dev.miibus |
|
dev.miibus.%parent |
parent class |
dev.miibus.[num] |
|
dev.miibus.[num].%desc |
device description |
dev.miibus.[num].%driver |
device driver name |
dev.miibus.[num].%iommu |
iommu unit handling the device requests |
dev.miibus.[num].%location |
device location relative to parent |
dev.miibus.[num].%parent |
parent device |
dev.miibus.[num].%pnpinfo |
device identification |
dev.mps |
|
dev.mps.%parent |
parent class |
dev.mps.[num] |
|
dev.mps.[num].%desc |
device description |
dev.mps.[num].%domain |
NUMA domain |
dev.mps.[num].%driver |
device driver name |
dev.mps.[num].%iommu |
iommu unit handling the device requests |
dev.mps.[num].%location |
device location relative to parent |
dev.mps.[num].%parent |
parent device |
dev.mps.[num].%pnpinfo |
device identification |
dev.mps.[num].chain_alloc_fail |
chain allocation failures |
dev.mps.[num].chain_free |
number of free chain elements |
dev.mps.[num].chain_free_lowwater |
lowest number of free chain elements |
dev.mps.[num].debug_level |
mps debug level |
dev.mps.[num].disable_msi |
Disable the use of MSI interrupts |
dev.mps.[num].disable_msix |
Disable the use of MSI-X interrupts |
dev.mps.[num].driver_version |
driver version |
dev.mps.[num].dump_reqs |
Dump Active Requests |
dev.mps.[num].dump_reqs_alltypes |
dump all request types, not just in-queue requests |
dev.mps.[num].enable_ssu |
enable SSU to SATA SSD/HDD at shutdown |
dev.mps.[num].encl_table_dump |
Enclosure Table Dump |
dev.mps.[num].firmware_version |
firmware version |
dev.mps.[num].io_cmds_active |
number of currently active commands |
dev.mps.[num].io_cmds_highwater |
maximum active commands seen |
dev.mps.[num].mapping_table_dump |
Mapping Table Dump |
dev.mps.[num].max_chains |
maximum chain frames that will be allocated |
dev.mps.[num].max_evtframes |
Total number of event frames allocated |
dev.mps.[num].max_io_pages |
maximum pages to allow per I/O (if <1 use IOCFacts) |
dev.mps.[num].max_msix |
User-defined maximum number of MSIX queues |
dev.mps.[num].max_prireqframes |
Total number of allocated high priority request frames |
dev.mps.[num].max_replyframes |
Total number of allocated reply frames |
dev.mps.[num].max_reqframes |
Total number of allocated request frames |
dev.mps.[num].msg_version |
message interface version (deprecated) |
dev.mps.[num].msix_msgs |
Negotiated number of MSIX queues |
dev.mps.[num].spinup_wait_time |
seconds to wait for spinup after SATA ID error |
dev.mps.[num].use_phy_num |
Use the phy number for enumeration |
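
String-valued entries such as dev.mps.[num].driver_version can be read the same way when the value size is not known ahead of time: a first sysctlbyname(3) call with a NULL buffer returns the required length. A minimal sketch, assuming controller unit 0 for illustration:

/* Read a string OID whose size is discovered at run time. */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
	const char *oid = "dev.mps.0.driver_version";	/* unit 0 assumed */
	size_t len = 0;
	char *buf;

	/* First call: NULL buffer, so only the required size is returned. */
	if (sysctlbyname(oid, NULL, &len, NULL, 0) == -1) {
		fprintf(stderr, "%s: %s\n", oid, strerror(errno));
		return (1);
	}
	if ((buf = malloc(len + 1)) == NULL)
		return (1);
	/* Second call: fetch the value into the sized buffer. */
	if (sysctlbyname(oid, buf, &len, NULL, 0) == -1) {
		fprintf(stderr, "%s: %s\n", oid, strerror(errno));
		free(buf);
		return (1);
	}
	buf[len] = '\0';
	printf("%s = %s\n", oid, buf);
	free(buf);
	return (0);
}
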
dev.mrsas |
|
dev.mrsas.%parent |
parent class |
dev.mrsas.[num] |
|
dev.mrsas.[num].%desc |
device description |
dev.mrsas.[num].%domain |
NUMA domain |
dev.mrsas.[num].%driver |
device driver name |
dev.mrsas.[num].%iommu |
iommu unit handling the device requests |
dev.mrsas.[num].%location |
device location relative to parent |
dev.mrsas.[num].%parent |
parent device |
dev.mrsas.[num].%pnpinfo |
device identification |
dev.mrsas.[num].SGE holes |
Number of IOs with holes in SGEs |
dev.mrsas.[num].block_sync_cache |
Block SYNC CACHE commands at the driver. <default: 0, send them to FW> |
dev.mrsas.[num].disable_ocr |
Disable the use of OCR |
dev.mrsas.[num].driver_version |
driver version |
dev.mrsas.[num].fw_outstanding |
FW outstanding commands |
dev.mrsas.[num].io_cmds_highwater |
Max FW outstanding commands |
dev.mrsas.[num].mrsas_debug |
Driver debug level |
dev.mrsas.[num].mrsas_fw_fault_check_delay |
FW fault check thread delay in seconds. <default is 1 sec> |
dev.mrsas.[num].mrsas_io_timeout |
Driver IO timeout value in milliseconds. |
dev.mrsas.[num].prp_count |
Number of IOs for which PRPs are built |
dev.mrsas.[num].reset_count |
number of OCRs (online controller resets) since driver start |
dev.mrsas.[num].reset_in_progress |
OCR in progress status |
dev.mrsas.[num].stream detection |
Disable/Enable Stream detection. <default: 1, Enable Stream Detection> |
dev.netmap |
Netmap args |
dev.netmap.admode |
Adapter mode. 0 selects the best option available, 1 forces the native adapter, 2 forces the emulated adapter |
dev.netmap.bridge_batch |
Max batch size to be used in the bridge |
dev.netmap.buf_curr_num |
Current number of netmap bufs |
dev.netmap.buf_curr_size |
Current size of netmap bufs |
dev.netmap.buf_num |
Requested number of netmap bufs |
dev.netmap.buf_size |
Requested size of netmap bufs |
dev.netmap.default_pipes |
For compatibility only |
dev.netmap.fwd |
Force NR_FORWARD mode |
dev.netmap.generic_hwcsum |
Hardware checksums. 0 to disable checksum generation by the NIC (default), 1 to enable checksum generation by the NIC |
dev.netmap.generic_mit |
RX notification interval in nanoseconds |
dev.netmap.generic_rings |
Number of TX/RX queues for emulated netmap adapters |
dev.netmap.generic_ringsize |
Number of per-ring slots for emulated netmap mode |
dev.netmap.if_curr_num |
Current number of netmap ifs |
dev.netmap.if_curr_size |
Current size of netmap ifs |
dev.netmap.if_num |
Requested number of netmap ifs |
dev.netmap.if_size |
Requested size of netmap ifs |
dev.netmap.iflib_crcstrip |
strip CRC on RX frames |
dev.netmap.iflib_rx_miss |
potentially missed RX intr |
dev.netmap.iflib_rx_miss_bufs |
potentially missed RX intr bufs |
dev.netmap.max_bridges |
Max number of vale bridges |
dev.netmap.no_pendintr |
Always look for new received packets. |
dev.netmap.no_timestamp |
no_timestamp |
dev.netmap.port_numa_affinity |
Use NUMA-local memory for memory pools when possible |
dev.netmap.priv_buf_num |
Default number of private netmap bufs |
dev.netmap.priv_buf_size |
Default size of private netmap bufs |
dev.netmap.priv_if_num |
Default number of private netmap ifs |
dev.netmap.priv_if_size |
Default size of private netmap ifs |
dev.netmap.priv_ring_num |
Default number of private netmap rings |
dev.netmap.priv_ring_size |
Default size of private netmap rings |
dev.netmap.ptnet_vnet_hdr |
Allow ptnet devices to use virtio-net headers |
dev.netmap.ring_curr_num |
Current number of netmap rings |
dev.netmap.ring_curr_size |
Current size of netmap rings |
dev.netmap.ring_num |
Requested number of netmap rings |
dev.netmap.ring_size |
Requested size of netmap rings |
dev.netmap.txsync_retry |
Number of txsync loops in bridge's flush. |
dev.netmap.verbose |
Verbose mode |
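
Writable knobs such as dev.netmap.admode accept a new value through the same interface, returning the old value at the same time. A minimal sketch, assuming the OID is int-sized and that forcing the native adapter (value 1, per the admode description above) is wanted; writing requires sufficient privilege:

/* Set dev.netmap.admode and report the previous value. */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	int oldval, newval = 1;		/* 1 = force native adapter */
	size_t oldlen = sizeof(oldval);

	if (sysctlbyname("dev.netmap.admode", &oldval, &oldlen,
	    &newval, sizeof(newval)) == -1) {
		fprintf(stderr, "dev.netmap.admode: %s\n", strerror(errno));
		return (1);
	}
	printf("dev.netmap.admode: %d -> %d\n", oldval, newval);
	return (0);
}
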
dev.nexus |
|
dev.nexus.%parent |
parent class |
dev.nexus.[num] |
|
dev.nexus.[num].%desc |
device description |
dev.nexus.[num].%driver |
device driver name |
dev.nexus.[num].%iommu |
iommu unit handling the device requests |
dev.nexus.[num].%location |
device location relative to parent |
dev.nexus.[num].%parent |
parent device |
dev.nexus.[num].%pnpinfo |
device identification |
dev.nvidia |
|
dev.nvidia.%parent |
parent class |
dev.nvidia.[num] |
|
dev.nvidia.[num].%desc |
device description |
dev.nvidia.[num].%driver |
device driver name |
dev.nvidia.[num].%iommu |
iommu unit handling the device requests |
dev.nvidia.[num].%location |
device location relative to parent |
dev.nvidia.[num].%parent |
parent device |
dev.nvidia.[num].%pnpinfo |
device identification |
dev.nvme |
|
dev.nvme.%parent |
parent class |
dev.nvme.[num] |
|
dev.nvme.[num].%desc |
device description |
dev.nvme.[num].%domain |
NUMA domain |
dev.nvme.[num].%driver |
device driver name |
dev.nvme.[num].%iommu |
iommu unit handling the device requests |
dev.nvme.[num].%location |
device location relative to parent |
dev.nvme.[num].%parent |
parent device |
dev.nvme.[num].%pnpinfo |
device identification |
dev.nvme.[num].admin_timeout_period |
Timeout period for Admin queue (in seconds) |
dev.nvme.[num].adminq |
Admin Queue |
dev.nvme.[num].adminq.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].adminq.dump_debug |
Dump debug data |
dev.nvme.[num].adminq.num_cmds |
Number of commands submitted |
dev.nvme.[num].adminq.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].adminq.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].adminq.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].adminq.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].adminq.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].adminq.num_retries |
Number of commands retried |
dev.nvme.[num].adminq.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].adminq.recovery |
Current recovery state of the queue |
dev.nvme.[num].adminq.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].adminq.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].alignment_splits |
Number of times an I/O was split to satisfy the drive's preferred alignment |
dev.nvme.[num].cap_hi |
High 32 bits of the controller capabilities (CAP) register for the drive |
dev.nvme.[num].cap_lo |
Low 32 bits of the controller capabilities (CAP) register for the drive |
dev.nvme.[num].fail_on_reset |
Pretend the next reset fails and fail the controller |
dev.nvme.[num].int_coal_threshold |
Interrupt coalescing threshold |
dev.nvme.[num].int_coal_time |
Interrupt coalescing timeout (in microseconds) |
dev.nvme.[num].ioq |
I/O Queues |
dev.nvme.[num].ioq.[num] |
IO Queue |
dev.nvme.[num].ioq.[num].cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq.[num].dump_debug |
Dump debug data |
dev.nvme.[num].ioq.[num].num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq.[num].num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq.[num].num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq.[num].num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq.[num].num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq.[num].num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq.[num].num_retries |
Number of commands retried |
dev.nvme.[num].ioq.[num].num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq.[num].recovery |
Current recovery state of the queue |
dev.nvme.[num].ioq.[num].sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq.[num].sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq0 |
IO Queue |
dev.nvme.[num].ioq0.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq0.dump_debug |
Dump debug data |
dev.nvme.[num].ioq0.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq0.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq0.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq0.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq0.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq0.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq0.num_retries |
Number of commands retried |
dev.nvme.[num].ioq0.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq0.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq0.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq1 |
IO Queue |
dev.nvme.[num].ioq1.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq1.dump_debug |
Dump debug data |
dev.nvme.[num].ioq1.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq1.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq1.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq1.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq1.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq1.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq1.num_retries |
Number of commands retried |
dev.nvme.[num].ioq1.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq1.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq1.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq10 |
IO Queue |
dev.nvme.[num].ioq10.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq10.dump_debug |
Dump debug data |
dev.nvme.[num].ioq10.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq10.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq10.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq10.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq10.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq10.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq10.num_retries |
Number of commands retried |
dev.nvme.[num].ioq10.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq10.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq10.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq11 |
IO Queue |
dev.nvme.[num].ioq11.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq11.dump_debug |
Dump debug data |
dev.nvme.[num].ioq11.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq11.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq11.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq11.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq11.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq11.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq11.num_retries |
Number of commands retried |
dev.nvme.[num].ioq11.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq11.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq11.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq12 |
IO Queue |
dev.nvme.[num].ioq12.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq12.dump_debug |
Dump debug data |
dev.nvme.[num].ioq12.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq12.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq12.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq12.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq12.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq12.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq12.num_retries |
Number of commands retried |
dev.nvme.[num].ioq12.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq12.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq12.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq13 |
IO Queue |
dev.nvme.[num].ioq13.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq13.dump_debug |
Dump debug data |
dev.nvme.[num].ioq13.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq13.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq13.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq13.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq13.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq13.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq13.num_retries |
Number of commands retried |
dev.nvme.[num].ioq13.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq13.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq13.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq14 |
IO Queue |
dev.nvme.[num].ioq14.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq14.dump_debug |
Dump debug data |
dev.nvme.[num].ioq14.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq14.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq14.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq14.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq14.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq14.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq14.num_retries |
Number of commands retried |
dev.nvme.[num].ioq14.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq14.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq14.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq15 |
IO Queue |
dev.nvme.[num].ioq15.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq15.dump_debug |
Dump debug data |
dev.nvme.[num].ioq15.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq15.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq15.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq15.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq15.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq15.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq15.num_retries |
Number of commands retried |
dev.nvme.[num].ioq15.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq15.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq15.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq16 |
IO Queue |
dev.nvme.[num].ioq16.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq16.dump_debug |
Dump debug data |
dev.nvme.[num].ioq16.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq16.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq16.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq16.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq16.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq16.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq16.num_retries |
Number of commands retried |
dev.nvme.[num].ioq16.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq16.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq16.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq17 |
IO Queue |
dev.nvme.[num].ioq17.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq17.dump_debug |
Dump debug data |
dev.nvme.[num].ioq17.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq17.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq17.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq17.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq17.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq17.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq17.num_retries |
Number of commands retried |
dev.nvme.[num].ioq17.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq17.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq17.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq18 |
IO Queue |
dev.nvme.[num].ioq18.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq18.dump_debug |
Dump debug data |
dev.nvme.[num].ioq18.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq18.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq18.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq18.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq18.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq18.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq18.num_retries |
Number of commands retried |
dev.nvme.[num].ioq18.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq18.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq18.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq19 |
IO Queue |
dev.nvme.[num].ioq19.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq19.dump_debug |
Dump debug data |
dev.nvme.[num].ioq19.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq19.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq19.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq19.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq19.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq19.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq19.num_retries |
Number of commands retried |
dev.nvme.[num].ioq19.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq19.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq19.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq2 |
IO Queue |
dev.nvme.[num].ioq2.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq2.dump_debug |
Dump debug data |
dev.nvme.[num].ioq2.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq2.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq2.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq2.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq2.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq2.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq2.num_retries |
Number of commands retried |
dev.nvme.[num].ioq2.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq2.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq2.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq20 |
IO Queue |
dev.nvme.[num].ioq20.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq20.dump_debug |
Dump debug data |
dev.nvme.[num].ioq20.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq20.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq20.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq20.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq20.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq20.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq20.num_retries |
Number of commands retried |
dev.nvme.[num].ioq20.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq20.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq20.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq21 |
IO Queue |
dev.nvme.[num].ioq21.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq21.dump_debug |
Dump debug data |
dev.nvme.[num].ioq21.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq21.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq21.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq21.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq21.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq21.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq21.num_retries |
Number of commands retried |
dev.nvme.[num].ioq21.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq21.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq21.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq22 |
IO Queue |
dev.nvme.[num].ioq22.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq22.dump_debug |
Dump debug data |
dev.nvme.[num].ioq22.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq22.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq22.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq22.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq22.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq22.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq22.num_retries |
Number of commands retried |
dev.nvme.[num].ioq22.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq22.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq22.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq23 |
IO Queue |
dev.nvme.[num].ioq23.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq23.dump_debug |
Dump debug data |
dev.nvme.[num].ioq23.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq23.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq23.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq23.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq23.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq23.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq23.num_retries |
Number of commands retried |
dev.nvme.[num].ioq23.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq23.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq23.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq24 |
IO Queue |
dev.nvme.[num].ioq24.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq24.dump_debug |
Dump debug data |
dev.nvme.[num].ioq24.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq24.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq24.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq24.num_ignored |
Number of interrupts posted but administratively ignored |
dev.nvme.[num].ioq24.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq24.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq24.num_retries |
Number of commands retried |
dev.nvme.[num].ioq24.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq24.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq24.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq25 |
IO Queue |
dev.nvme.[num].ioq25.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq25.dump_debug |
Dump debug data |
dev.nvme.[num].ioq25.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq25.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq25.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq25.num_ignored |
Number of interrupts posted, but were administratively ignored |
dev.nvme.[num].ioq25.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq25.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq25.num_retries |
Number of commands retried |
dev.nvme.[num].ioq25.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq25.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq25.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq26 |
IO Queue |
dev.nvme.[num].ioq26.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq26.dump_debug |
Dump debug data |
dev.nvme.[num].ioq26.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq26.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq26.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq26.num_ignored |
Number of interrupts posted, but were administratively ignored |
dev.nvme.[num].ioq26.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq26.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq26.num_retries |
Number of commands retried |
dev.nvme.[num].ioq26.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq26.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq26.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq27 |
IO Queue |
dev.nvme.[num].ioq27.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq27.dump_debug |
Dump debug data |
dev.nvme.[num].ioq27.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq27.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq27.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq27.num_ignored |
Number of interrupts posted, but were administratively ignored |
dev.nvme.[num].ioq27.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq27.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq27.num_retries |
Number of commands retried |
dev.nvme.[num].ioq27.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq27.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq27.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq28 |
IO Queue |
dev.nvme.[num].ioq28.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq28.dump_debug |
Dump debug data |
dev.nvme.[num].ioq28.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq28.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq28.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq28.num_ignored |
Number of interrupts posted, but were administratively ignored |
dev.nvme.[num].ioq28.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq28.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq28.num_retries |
Number of commands retried |
dev.nvme.[num].ioq28.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq28.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq28.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq29 |
IO Queue |
dev.nvme.[num].ioq29.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq29.dump_debug |
Dump debug data |
dev.nvme.[num].ioq29.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq29.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq29.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq29.num_ignored |
Number of interrupts posted, but were administratively ignored |
dev.nvme.[num].ioq29.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq29.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq29.num_retries |
Number of commands retried |
dev.nvme.[num].ioq29.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq29.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq29.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq3 |
IO Queue |
dev.nvme.[num].ioq3.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq3.dump_debug |
Dump debug data |
dev.nvme.[num].ioq3.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq3.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq3.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq3.num_ignored |
Number of interrupts posted, but were administratively ignored |
dev.nvme.[num].ioq3.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq3.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq3.num_retries |
Number of commands retried |
dev.nvme.[num].ioq3.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq3.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq3.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq30 |
IO Queue |
dev.nvme.[num].ioq30.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq30.dump_debug |
Dump debug data |
dev.nvme.[num].ioq30.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq30.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq30.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq30.num_ignored |
Number of interrupts posted, but were administratively ignored |
dev.nvme.[num].ioq30.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq30.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq30.num_retries |
Number of commands retried |
dev.nvme.[num].ioq30.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq30.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq30.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq31 |
IO Queue |
dev.nvme.[num].ioq31.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq31.dump_debug |
Dump debug data |
dev.nvme.[num].ioq31.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq31.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq31.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq31.num_ignored |
Number of interrupts posted, but were administratively ignored |
dev.nvme.[num].ioq31.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq31.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq31.num_retries |
Number of commands retried |
dev.nvme.[num].ioq31.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq31.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq31.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq32 |
IO Queue |
dev.nvme.[num].ioq32.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq32.dump_debug |
Dump debug data |
dev.nvme.[num].ioq32.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq32.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq32.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq32.num_ignored |
Number of interrupts posted, but were administratively ignored |
dev.nvme.[num].ioq32.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq32.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq32.num_retries |
Number of commands retried |
dev.nvme.[num].ioq32.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq32.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq32.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq33 |
IO Queue |
dev.nvme.[num].ioq33.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq33.dump_debug |
Dump debug data |
dev.nvme.[num].ioq33.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq33.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq33.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq33.num_ignored |
Number of interrupts posted, but were administratively ignored |
dev.nvme.[num].ioq33.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq33.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq33.num_retries |
Number of commands retried |
dev.nvme.[num].ioq33.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq33.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq33.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq34 |
IO Queue |
dev.nvme.[num].ioq34.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq34.dump_debug |
Dump debug data |
dev.nvme.[num].ioq34.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq34.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq34.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq34.num_ignored |
Number of interrupts posted, but were administratively ignored |
dev.nvme.[num].ioq34.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq34.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq34.num_retries |
Number of commands retried |
dev.nvme.[num].ioq34.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq34.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq34.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq35 |
IO Queue |
dev.nvme.[num].ioq35.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq35.dump_debug |
Dump debug data |
dev.nvme.[num].ioq35.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq35.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq35.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq35.num_ignored |
Number of interrupts posted, but were administratively ignored |
dev.nvme.[num].ioq35.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq35.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq35.num_retries |
Number of commands retried |
dev.nvme.[num].ioq35.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq35.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq35.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq4 |
IO Queue |
dev.nvme.[num].ioq4.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq4.dump_debug |
Dump debug data |
dev.nvme.[num].ioq4.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq4.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq4.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq4.num_ignored |
Number of interrupts posted, but were administratively ignored |
dev.nvme.[num].ioq4.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq4.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq4.num_retries |
Number of commands retried |
dev.nvme.[num].ioq4.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq4.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq4.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq5 |
IO Queue |
dev.nvme.[num].ioq5.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq5.dump_debug |
Dump debug data |
dev.nvme.[num].ioq5.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq5.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq5.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq5.num_ignored |
Number of interrupts posted, but were administratively ignored |
dev.nvme.[num].ioq5.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq5.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq5.num_retries |
Number of commands retried |
dev.nvme.[num].ioq5.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq5.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq5.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq6 |
IO Queue |
dev.nvme.[num].ioq6.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq6.dump_debug |
Dump debug data |
dev.nvme.[num].ioq6.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq6.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq6.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq6.num_ignored |
Number of interrupts posted, but were administratively ignored |
dev.nvme.[num].ioq6.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq6.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq6.num_retries |
Number of commands retried |
dev.nvme.[num].ioq6.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq6.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq6.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq7 |
IO Queue |
dev.nvme.[num].ioq7.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq7.dump_debug |
Dump debug data |
dev.nvme.[num].ioq7.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq7.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq7.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq7.num_ignored |
Number of interrupts posted, but were administratively ignored |
dev.nvme.[num].ioq7.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq7.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq7.num_retries |
Number of commands retried |
dev.nvme.[num].ioq7.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq7.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq7.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq8 |
IO Queue |
dev.nvme.[num].ioq8.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq8.dump_debug |
Dump debug data |
dev.nvme.[num].ioq8.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq8.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq8.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq8.num_ignored |
Number of interrupts posted, but were administratively ignored |
dev.nvme.[num].ioq8.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq8.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq8.num_retries |
Number of commands retried |
dev.nvme.[num].ioq8.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq8.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq8.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].ioq9 |
IO Queue |
dev.nvme.[num].ioq9.cq_head |
Current head of completion queue (as observed by driver) |
dev.nvme.[num].ioq9.dump_debug |
Dump debug data |
dev.nvme.[num].ioq9.num_cmds |
Number of commands submitted |
dev.nvme.[num].ioq9.num_entries |
Number of entries in hardware queue |
dev.nvme.[num].ioq9.num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].ioq9.num_ignored |
Number of interrupts posted, but were administratively ignored |
dev.nvme.[num].ioq9.num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].ioq9.num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].ioq9.num_retries |
Number of commands retried |
dev.nvme.[num].ioq9.num_trackers |
Number of trackers pre-allocated for this queue pair |
dev.nvme.[num].ioq9.sq_head |
Current head of submission queue (as observed by driver) |
dev.nvme.[num].ioq9.sq_tail |
Current tail of submission queue (as observed by driver) |
dev.nvme.[num].num_cmds |
Number of commands submitted |
dev.nvme.[num].num_failures |
Number of commands ending in failure after all retries |
dev.nvme.[num].num_ignored |
Number of interrupts ignored administratively |
dev.nvme.[num].num_intr_handler_calls |
Number of times interrupt handler was invoked (will typically be less than number of actual interrupts generated due to coalescing) |
dev.nvme.[num].num_io_queues |
Number of I/O queue pairs |
dev.nvme.[num].num_recovery_nolock |
Number of times that we failed to lock recovery in the ISR |
dev.nvme.[num].num_retries |
Number of commands retried |
dev.nvme.[num].reset_stats |
Reset statistics to zero |
dev.nvme.[num].timeout_period |
Timeout period for I/O queues (in seconds) |
dev.nvme.[num].wake |
Device set to wake the system |
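The dev.nvme.[num] controller and per-queue counters above are plain integer sysctls, so a monitoring tool can read them directly as well as through sysctl(8). A minimal sketch of reading one of them follows; the controller unit (nvme0) and the 64-bit width of the counter are assumptions, not something the listing guarantees.

    /* Read one of the dev.nvme counters listed above via sysctlbyname(3). */
    #include <sys/types.h>
    #include <sys/sysctl.h>

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        uint64_t cmds = 0;
        size_t len = sizeof(cmds);

        /* Assumes controller unit 0; adjust the OID for other units or queues. */
        if (sysctlbyname("dev.nvme.0.num_cmds", &cmds, &len, NULL, 0) == -1) {
            fprintf(stderr, "dev.nvme.0.num_cmds: %s\n", strerror(errno));
            return (1);
        }
        printf("nvme0 commands submitted: %ju\n", (uintmax_t)cmds);
        return (0);
    }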
dev.orm |
|
dev.orm.%parent |
parent class |
dev.orm.[num] |
|
dev.orm.[num].%desc |
device description |
dev.orm.[num].%domain |
NUMA domain |
dev.orm.[num].%driver |
device driver name |
dev.orm.[num].%iommu |
iommu unit handling the device requests |
dev.orm.[num].%location |
device location relative to parent |
dev.orm.[num].%parent |
parent device |
dev.orm.[num].%pnpinfo |
device identification |
dev.pchtherm |
|
dev.pchtherm.%parent |
parent class |
dev.pchtherm.[num] |
|
dev.pchtherm.[num].%desc |
device description |
dev.pchtherm.[num].%driver |
device driver name |
dev.pchtherm.[num].%iommu |
iommu unit handling the device requests |
dev.pchtherm.[num].%location |
device location relative to parent |
dev.pchtherm.[num].%parent |
parent device |
dev.pchtherm.[num].%pnpinfo |
device identification |
dev.pchtherm.[num].ctt |
Catastrophic Trip Point |
dev.pchtherm.[num].pmtemp |
Thermal sensor idle temperature |
dev.pchtherm.[num].pmtime |
Thermal sensor idle duration |
dev.pchtherm.[num].t0temp |
T0 temperature |
dev.pchtherm.[num].t1temp |
T1 temperature |
dev.pchtherm.[num].t2temp |
T2 temperature |
dev.pchtherm.[num].temperature |
Current temperature |
dev.pci |
|
dev.pci.%parent |
parent class |
dev.pci.[num] |
|
dev.pci.[num].%desc |
device description |
dev.pci.[num].%domain |
NUMA domain |
dev.pci.[num].%driver |
device driver name |
dev.pci.[num].%iommu |
iommu unit handling the device requests |
dev.pci.[num].%location |
device location relative to parent |
dev.pci.[num].%parent |
parent device |
dev.pci.[num].%pnpinfo |
device identification |
dev.pci.[num].wake |
Device set to wake the system |
dev.pci_link |
|
dev.pci_link.%parent |
parent class |
dev.pci_link.[num] |
|
dev.pci_link.[num].%desc |
device description |
dev.pci_link.[num].%driver |
device driver name |
dev.pci_link.[num].%iommu |
iommu unit handling the device requests |
dev.pci_link.[num].%location |
device location relative to parent |
dev.pci_link.[num].%parent |
parent device |
dev.pci_link.[num].%pnpinfo |
device identification |
dev.pcib |
|
dev.pcib.%parent |
parent class |
dev.pcib.[num] |
|
dev.pcib.[num].%desc |
device description |
dev.pcib.[num].%domain |
NUMA domain |
dev.pcib.[num].%driver |
device driver name |
dev.pcib.[num].%iommu |
iommu unit handling the device requests |
dev.pcib.[num].%location |
device location relative to parent |
dev.pcib.[num].%parent |
parent device |
dev.pcib.[num].%pnpinfo |
device identification |
dev.pcib.[num].domain |
Domain number |
dev.pcib.[num].pribus |
Primary bus number |
dev.pcib.[num].secbus |
Secondary bus number |
dev.pcib.[num].subbus |
Subordinate bus number |
dev.pcib.[num].wake |
Device set to wake the system |
dev.pcm |
|
dev.pcm.%parent |
parent class |
dev.pcm.[num] |
|
dev.pcm.[num].%desc |
device description |
dev.pcm.[num].%driver |
device driver name |
dev.pcm.[num].%iommu |
iommu unit handling the device requests |
dev.pcm.[num].%location |
device location relative to parent |
dev.pcm.[num].%parent |
parent device |
dev.pcm.[num].%pnpinfo |
device identification |
dev.pcm.[num].bitperfect |
bit-perfect playback/recording (0=disable, 1=enable) |
dev.pcm.[num].buffersize |
allocated buffer size |
dev.pcm.[num].feedback_rate |
Feedback sample rate in Hz |
dev.pcm.[num].hwvol_mixer |
|
dev.pcm.[num].hwvol_step |
|
dev.pcm.[num].mixer |
|
dev.pcm.[num].mixer.mute_0 |
Mixer control nodes |
dev.pcm.[num].mixer.mute_0.desc |
Description |
dev.pcm.[num].mixer.mute_0.max |
Maximum value |
dev.pcm.[num].mixer.mute_0.min |
Minimum value |
dev.pcm.[num].mixer.mute_0.val |
Current value |
dev.pcm.[num].mode |
mode (1=mixer, 2=play, 4=rec. The values are OR'ed if more than one mode is supported) |
dev.pcm.[num].play |
playback channels node |
dev.pcm.[num].play.32bit |
Resolution of 32bit samples (20/24/32bit) |
dev.pcm.[num].play.vchanformat |
virtual channel mixing format |
dev.pcm.[num].play.vchanmode |
vchan format/rate selection: 0=fixed, 1=passthrough, 2=adaptive |
dev.pcm.[num].play.vchanrate |
virtual channel mixing speed/rate |
dev.pcm.[num].play.vchans |
virtual channels enabled |
dev.pcm.[num].rec |
recording channels node |
dev.pcm.[num].rec.32bit |
Resolution of 32bit samples (20/24/32bit) |
dev.pcm.[num].rec.autosrc |
Automatic recording source selection |
dev.pcm.[num].rec.vchanformat |
virtual channel mixing format |
dev.pcm.[num].rec.vchanmode |
vchan format/rate selection: 0=fixed, 1=passthrough, 2=adaptive |
dev.pcm.[num].rec.vchanrate |
virtual channel mixing speed/rate |
dev.pcm.[num].rec.vchans |
virtual channels enabled |
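Several of the dev.pcm.[num] nodes above are writable at run time; the virtual-channel counts are a common one to adjust. A minimal sketch of setting the playback vchan count, assuming the sound device is pcm0, that four channels is an acceptable value for that hardware, and that the caller is privileged to write sysctls:

    /* Set dev.pcm.0.play.vchans (illustrative unit and value). */
    #include <sys/types.h>
    #include <sys/sysctl.h>

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        int vchans = 4;    /* desired number of playback virtual channels */

        if (sysctlbyname("dev.pcm.0.play.vchans", NULL, NULL,
            &vchans, sizeof(vchans)) == -1) {
            fprintf(stderr, "dev.pcm.0.play.vchans: %s\n", strerror(errno));
            return (1);
        }
        return (0);
    }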
dev.psmcpnp |
|
dev.psmcpnp.%parent |
parent class |
dev.psmcpnp.[num] |
|
dev.psmcpnp.[num].%desc |
device description |
dev.psmcpnp.[num].%driver |
device driver name |
dev.psmcpnp.[num].%iommu |
iommu unit handling the device requests |
dev.psmcpnp.[num].%location |
device location relative to parent |
dev.psmcpnp.[num].%parent |
parent device |
dev.psmcpnp.[num].%pnpinfo |
device identification |
dev.ram |
|
dev.ram.%parent |
parent class |
dev.ram.[num] |
|
dev.ram.[num].%desc |
device description |
dev.ram.[num].%driver |
device driver name |
dev.ram.[num].%iommu |
iommu unit handling the device requests |
dev.ram.[num].%location |
device location relative to parent |
dev.ram.[num].%parent |
parent device |
dev.ram.[num].%pnpinfo |
device identification |
dev.re |
|
dev.re.%parent |
parent class |
dev.re.[num] |
|
dev.re.[num].%desc |
device description |
dev.re.[num].%driver |
device driver name |
dev.re.[num].%iommu |
iommu unit handling the device requests |
dev.re.[num].%location |
device location relative to parent |
dev.re.[num].%parent |
parent device |
dev.re.[num].%pnpinfo |
device identification |
dev.re.[num].int_rx_mod |
re RX interrupt moderation |
dev.re.[num].stats |
Statistics Information |
dev.re.[num].wake |
Device set to wake the system |
dev.rgephy |
|
dev.rgephy.%parent |
parent class |
dev.rgephy.[num] |
|
dev.rgephy.[num].%desc |
device description |
dev.rgephy.[num].%driver |
device driver name |
dev.rgephy.[num].%iommu |
iommu unit handling the device requests |
dev.rgephy.[num].%location |
device location relative to parent |
dev.rgephy.[num].%parent |
parent device |
dev.rgephy.[num].%pnpinfo |
device identification |
dev.rtwn |
|
dev.rtwn.%parent |
parent class |
dev.rtwn.[num] |
|
dev.rtwn.[num].%desc |
device description |
dev.rtwn.[num].%driver |
device driver name |
dev.rtwn.[num].%iommu |
iommu unit handling the device requests |
dev.rtwn.[num].%location |
device location relative to parent |
dev.rtwn.[num].%parent |
parent device |
dev.rtwn.[num].%pnpinfo |
device identification |
dev.rtwn.[num].debug |
Control debugging printfs |
dev.rtwn.[num].ena_tsf64 |
Enable/disable per-packet TSF64 reporting |
dev.rtwn.[num].ht40 |
Enable 40 MHz mode support |
dev.rtwn.[num].hwcrypto |
Enable h/w crypto: 0 - disable, 1 - pairwise keys, 2 - all keys |
dev.rtwn.[num].radar_detection |
Enable radar detection (untested) |
dev.rtwn.[num].ratectl |
Select rate control mechanism: 0 - disabled, 1 - via net80211, 2 - via firmware |
dev.rtwn.[num].ratectl_selected |
Currently selected rate control mechanism (by the driver) |
dev.rtwn.[num].reg_addr |
debug register address |
dev.rtwn.[num].reg_val |
debug register read/write |
dev.rtwn.[num].rx_buf_size |
Rx buffer size, 512-byte units [4...64] |
dev.smbios |
|
dev.smbios.%parent |
parent class |
dev.smbios.[num] |
|
dev.smbios.[num].%desc |
device description |
dev.smbios.[num].%driver |
device driver name |
dev.smbios.[num].%iommu |
iommu unit handling the device requests |
dev.smbios.[num].%location |
device location relative to parent |
dev.smbios.[num].%parent |
parent device |
dev.smbios.[num].%pnpinfo |
device identification |
dev.smbus |
|
dev.smbus.%parent |
parent class |
dev.smbus.[num] |
|
dev.smbus.[num].%desc |
device description |
dev.smbus.[num].%domain |
NUMA domain |
dev.smbus.[num].%driver |
device driver name |
dev.smbus.[num].%iommu |
iommu unit handling the device requests |
dev.smbus.[num].%location |
device location relative to parent |
dev.smbus.[num].%parent |
parent device |
dev.smbus.[num].%pnpinfo |
device identification |
dev.storvsc |
|
dev.storvsc.%parent |
parent class |
dev.storvsc.[num] |
|
dev.storvsc.[num].%desc |
device description |
dev.storvsc.[num].%driver |
device driver name |
dev.storvsc.[num].%iommu |
iommu unit handling the device requests |
dev.storvsc.[num].%location |
device location relative to parent |
dev.storvsc.[num].%parent |
parent device |
dev.storvsc.[num].%pnpinfo |
device identification |
dev.storvsc.[num].channel |
|
dev.storvsc.[num].channel.[num] |
|
dev.storvsc.[num].channel.[num].br |
|
dev.storvsc.[num].channel.[num].br.rx |
|
dev.storvsc.[num].channel.[num].br.rx.state |
rx state |
dev.storvsc.[num].channel.[num].br.rx.state_bin |
rx binary state |
dev.storvsc.[num].channel.[num].br.tx |
|
dev.storvsc.[num].channel.[num].br.tx.state |
tx state |
dev.storvsc.[num].channel.[num].br.tx.state_bin |
tx binary state |
dev.storvsc.[num].channel.[num].cpu |
owner CPU id |
dev.storvsc.[num].channel.[num].mnf |
has monitor notification facilities |
dev.storvsc.[num].channel.[num].send_req |
# of request sending from this channel |
dev.storvsc.[num].data_bio_cnt |
# of bio data block |
dev.storvsc.[num].data_sg_cnt |
# of sg data block |
dev.storvsc.[num].data_vaddr_cnt |
# of vaddr data block |
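When a counter such as dev.storvsc.[num].data_bio_cnt is polled repeatedly, the name can be resolved once with sysctlnametomib(3) and then fetched with sysctl(3). A minimal sketch, assuming a storvsc instance numbered 0 and a counter no wider than 64 bits (both are assumptions):

    /* Resolve the MIB once, then poll the counter a few times. */
    #include <sys/types.h>
    #include <sys/sysctl.h>

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
        int mib[CTL_MAXNAME];
        size_t miblen = CTL_MAXNAME;
        int i;

        if (sysctlnametomib("dev.storvsc.0.data_bio_cnt", mib, &miblen) == -1) {
            fprintf(stderr, "sysctlnametomib: %s\n", strerror(errno));
            return (1);
        }
        for (i = 0; i < 5; i++) {
            uint64_t val = 0;
            size_t len = sizeof(val);

            if (sysctl(mib, (u_int)miblen, &val, &len, NULL, 0) == -1) {
                fprintf(stderr, "sysctl: %s\n", strerror(errno));
                return (1);
            }
            printf("bio data blocks: %ju\n", (uintmax_t)val);
            sleep(1);
        }
        return (0);
    }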
dev.uart |
|
dev.uart.%parent |
parent class |
dev.uart.[num] |
|
dev.uart.[num].%desc |
device description |
dev.uart.[num].%driver |
device driver name |
dev.uart.[num].%iommu |
iommu unit handling the device requests |
dev.uart.[num].%location |
device location relative to parent |
dev.uart.[num].%parent |
parent device |
dev.uart.[num].%pnpinfo |
device identification |
dev.uart.[num].pps_mode |
pulse mode: 0/1/2=disabled/CTS/DCD; add 0x10 to invert, 0x20 for narrow pulse |
dev.uart.[num].rx_overruns |
Receive overruns |
dev.uart.[num].wake |
Device set to wake the system |
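As a worked example of the dev.uart.[num].pps_mode encoding above: capturing PPS on DCD with inverted polarity would be 2 + 0x10 = 0x12 (18 decimal), and CTS capture with the narrow-pulse option would be 1 + 0x20 = 0x21 (33 decimal). These particular combinations are only illustrations of the bit arithmetic described in that entry.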
dev.uaudio |
|
dev.uaudio.%parent |
parent class |
dev.uaudio.[num] |
|
dev.uaudio.[num].%desc |
device description |
dev.uaudio.[num].%driver |
device driver name |
dev.uaudio.[num].%iommu |
iommu unit handling the device requests |
dev.uaudio.[num].%location |
device location relative to parent |
dev.uaudio.[num].%parent |
parent device |
dev.uaudio.[num].%pnpinfo |
device identification |
dev.ubt |
|
dev.ubt.%parent |
parent class |
dev.ubt.[num] |
|
dev.ubt.[num].%desc |
device description |
dev.ubt.[num].%driver |
device driver name |
dev.ubt.[num].%iommu |
iommu unit handling the device requests |
dev.ubt.[num].%location |
device location relative to parent |
dev.ubt.[num].%parent |
parent device |
dev.ubt.[num].%pnpinfo |
device identification |
dev.uhci |
|
dev.uhci.%parent |
parent class |
dev.uhci.[num] |
|
dev.uhci.[num].%desc |
device description |
dev.uhci.[num].%domain |
NUMA domain |
dev.uhci.[num].%driver |
device driver name |
dev.uhci.[num].%iommu |
iommu unit handling the device requests |
dev.uhci.[num].%location |
device location relative to parent |
dev.uhci.[num].%parent |
parent device |
dev.uhci.[num].%pnpinfo |
device identification |
dev.uhci.[num].wake |
Device set to wake the system |
dev.uhub |
|
dev.uhub.%parent |
parent class |
dev.uhub.[num] |
|
dev.uhub.[num].%desc |
device description |
dev.uhub.[num].%domain |
NUMA domain |
dev.uhub.[num].%driver |
device driver name |
dev.uhub.[num].%iommu |
iommu unit handling the device requests |
dev.uhub.[num].%location |
device location relative to parent |
dev.uhub.[num].%parent |
parent device |
dev.uhub.[num].%pnpinfo |
device identification |
dev.uhub.[num].disable_enumeration |
Set to disable enumeration on this USB HUB. |
dev.uhub.[num].disable_port_power |
Set to disable USB port power on this USB HUB. |
dev.ukbd |
|
dev.ukbd.%parent |
parent class |
dev.ukbd.[num] |
|
dev.ukbd.[num].%desc |
device description |
dev.ukbd.[num].%domain |
NUMA domain |
dev.ukbd.[num].%driver |
device driver name |
dev.ukbd.[num].%iommu |
iommu unit handling the device requests |
dev.ukbd.[num].%location |
device location relative to parent |
dev.ukbd.[num].%parent |
parent device |
dev.ukbd.[num].%pnpinfo |
device identification |
dev.ulpt |
|
dev.ulpt.%parent |
parent class |
dev.ulpt.[num] |
|
dev.ulpt.[num].%desc |
device description |
dev.ulpt.[num].%driver |
device driver name |
dev.ulpt.[num].%iommu |
iommu unit handling the device requests |
dev.ulpt.[num].%location |
device location relative to parent |
dev.ulpt.[num].%parent |
parent device |
dev.ulpt.[num].%pnpinfo |
device identification |
dev.umass |
|
dev.umass.%parent |
parent class |
dev.umass.[num] |
|
dev.umass.[num].%desc |
device description |
dev.umass.[num].%domain |
NUMA domain |
dev.umass.[num].%driver |
device driver name |
dev.umass.[num].%iommu |
iommu unit handling the device requests |
dev.umass.[num].%location |
device location relative to parent |
dev.umass.[num].%parent |
parent device |
dev.umass.[num].%pnpinfo |
device identification |
dev.umodem |
|
dev.umodem.%parent |
parent class |
dev.umodem.[num] |
|
dev.umodem.[num].%desc |
device description |
dev.umodem.[num].%driver |
device driver name |
dev.umodem.[num].%iommu |
iommu unit handling the device requests |
dev.umodem.[num].%location |
device location relative to parent |
dev.umodem.[num].%parent |
parent device |
dev.umodem.[num].%pnpinfo |
device identification |
dev.umodem.[num].ttyname |
TTY device basename |
dev.umodem.[num].ttyports |
Number of ports |
dev.ums |
|
dev.ums.%parent |
parent class |
dev.ums.[num] |
|
dev.ums.[num].%desc |
device description |
dev.ums.[num].%domain |
NUMA domain |
dev.ums.[num].%driver |
device driver name |
dev.ums.[num].%iommu |
iommu unit handling the device requests |
dev.ums.[num].%location |
device location relative to parent |
dev.ums.[num].%parent |
parent device |
dev.ums.[num].%pnpinfo |
device identification |
dev.ums.[num].parseinfo |
Dump of parsed HID report descriptor |
dev.usbhid |
|
dev.usbhid.%parent |
parent class |
dev.usbhid.[num] |
|
dev.usbhid.[num].%desc |
device description |
dev.usbhid.[num].%driver |
device driver name |
dev.usbhid.[num].%iommu |
iommu unit handling the device requests |
dev.usbhid.[num].%location |
device location relative to parent |
dev.usbhid.[num].%parent |
parent device |
dev.usbhid.[num].%pnpinfo |
device identification |
dev.usbus |
|
dev.usbus.%parent |
parent class |
dev.usbus.[num] |
|
dev.usbus.[num].%desc |
device description |
dev.usbus.[num].%domain |
NUMA domain |
dev.usbus.[num].%driver |
device driver name |
dev.usbus.[num].%iommu |
iommu unit handling the device requests |
dev.usbus.[num].%location |
device location relative to parent |
dev.usbus.[num].%parent |
parent device |
dev.usbus.[num].%pnpinfo |
device identification |
dev.vga |
|
dev.vga.%parent |
parent class |
dev.vga.[num] |
|
dev.vga.[num].%desc |
device description |
dev.vga.[num].%driver |
device driver name |
dev.vga.[num].%iommu |
iommu unit handling the device requests |
dev.vga.[num].%location |
device location relative to parent |
dev.vga.[num].%parent |
parent device |
dev.vga.[num].%pnpinfo |
device identification |
dev.vgapci |
|
dev.vgapci.%parent |
parent class |
dev.vgapci.[num] |
|
dev.vgapci.[num].%desc |
device description |
dev.vgapci.[num].%domain |
NUMA domain |
dev.vgapci.[num].%driver |
device driver name |
dev.vgapci.[num].%iommu |
iommu unit handling the device requests |
dev.vgapci.[num].%location |
device location relative to parent |
dev.vgapci.[num].%parent |
parent device |
dev.vgapci.[num].%pnpinfo |
device identification |
dev.vgapci.[num].wake |
Device set to wake the system |
dev.vmbus |
|
dev.vmbus.%parent |
parent class |
dev.vmbus.[num] |
|
dev.vmbus.[num].%desc |
device description |
dev.vmbus.[num].%driver |
device driver name |
dev.vmbus.[num].%iommu |
iommu unit handling the device requests |
dev.vmbus.[num].%location |
device location relative to parent |
dev.vmbus.[num].%parent |
parent device |
dev.vmbus.[num].%pnpinfo |
device identification |
dev.vmbus.[num].version |
vmbus version |
dev.vmbus_res |
|
dev.vmbus_res.%parent |
parent class |
dev.vmbus_res.[num] |
|
dev.vmbus_res.[num].%desc |
device description |
dev.vmbus_res.[num].%driver |
device driver name |
dev.vmbus_res.[num].%iommu |
iommu unit handling the device requests |
dev.vmbus_res.[num].%location |
device location relative to parent |
dev.vmbus_res.[num].%parent |
parent device |
dev.vmbus_res.[num].%pnpinfo |
device identification |
dev.vmgenc |
|
dev.vmgenc.%parent |
parent class |
dev.vmgenc.[num] |
|
dev.vmgenc.[num].%desc |
device description |
dev.vmgenc.[num].%driver |
device driver name |
dev.vmgenc.[num].%iommu |
iommu unit handling the device requests |
dev.vmgenc.[num].%location |
device location relative to parent |
dev.vmgenc.[num].%parent |
parent device |
dev.vmgenc.[num].%pnpinfo |
device identification |
dev.vmgenc.[num].guid |
latest cached VM generation counter (128-bit UUID) |
dev.vtvga |
|
dev.vtvga.%parent |
parent class |
dev.vtvga.[num] |
|
dev.vtvga.[num].%desc |
device description |
dev.vtvga.[num].%driver |
device driver name |
dev.vtvga.[num].%iommu |
iommu unit handling the device requests |
dev.vtvga.[num].%location |
device location relative to parent |
dev.vtvga.[num].%parent |
parent device |
dev.vtvga.[num].%pnpinfo |
device identification |
dev.xen |
Xen |
dev.xen.balloon |
Balloon |
dev.xen.balloon.current |
Current allocation |
dev.xen.balloon.driver_pages |
Driver pages |
dev.xen.balloon.hard_limit |
Xen hard limit |
dev.xen.balloon.high_mem |
High-mem balloon |
dev.xen.balloon.low_mem |
Low-mem balloon |
dev.xen.balloon.target |
Target allocation |
dev.xen.xsd_kva |
|
dev.xen.xsd_port |
|
dev.xhci |
|
dev.xhci.%parent |
parent class |
dev.xhci.[num] |
|
dev.xhci.[num].%desc |
device description |
dev.xhci.[num].%driver |
device driver name |
dev.xhci.[num].%iommu |
iommu unit handling the device requests |
dev.xhci.[num].%location |
device location relative to parent |
dev.xhci.[num].%parent |
parent device |
dev.xhci.[num].%pnpinfo |
device identification |
dev.xhci.[num].wake |
Device set to wake the system |
hw |
hardware |
hw.aac |
AAC driver parameters |
hw.aac.enable_msi |
Enable MSI interrupts |
hw.aacraid |
AACRAID driver parameters |
hw.acpi |
|
hw.acpi.acline |
|
hw.acpi.apei |
ACPI Platform Error Interface |
hw.acpi.apei.log_corrected |
Log corrected errors to the console |
hw.acpi.battery |
battery status and info |
hw.acpi.battery.info_expire |
time in seconds until info is refreshed |
hw.acpi.battery.life |
percent capacity remaining |
hw.acpi.battery.rate |
present rate in mW |
hw.acpi.battery.state |
current status flags |
hw.acpi.battery.time |
remaining time in minutes |
hw.acpi.battery.units |
number of batteries |
hw.acpi.cpu |
node for CPU children |
hw.acpi.cpu.cppc_notify |
Register for CPPC Notifications |
hw.acpi.cpu.cx_lowest |
Global lowest Cx sleep state to use |
hw.acpi.disable_on_reboot |
Disable ACPI when rebooting/halting system |
hw.acpi.handle_reboot |
Use ACPI Reset Register to reboot |
hw.acpi.lid_switch_state |
Lid ACPI sleep state. Set to S3 if you want to suspend your laptop when the lid is closed. |
hw.acpi.override_isa_irq_polarity |
Force active-hi polarity for edge-triggered ISA IRQs |
hw.acpi.power_button_state |
Power button ACPI sleep state. |
hw.acpi.reset_video |
Call the VESA reset BIOS vector on the resume path |
hw.acpi.s4bios |
S4BIOS mode |
hw.acpi.sleep_button_state |
Sleep button ACPI sleep state. |
hw.acpi.sleep_delay |
sleep delay in seconds |
hw.acpi.standby_state |
|
hw.acpi.supported_sleep_state |
List supported ACPI sleep states. |
hw.acpi.suspend_state |
|
hw.acpi.thermal |
|
hw.acpi.thermal.min_runtime |
minimum cooling run time in sec |
hw.acpi.thermal.polling_rate |
monitor polling interval in seconds |
hw.acpi.thermal.tz0 |
|
hw.acpi.thermal.tz0._ACx |
|
hw.acpi.thermal.tz0._CR3 |
too warm temp setpoint (standby now) |
hw.acpi.thermal.tz0._CRT |
critical temp setpoint (shutdown now) |
hw.acpi.thermal.tz0._HOT |
too hot temp setpoint (suspend now) |
hw.acpi.thermal.tz0._PSV |
passive cooling temp setpoint |
hw.acpi.thermal.tz0._TC1 |
thermal constant 1 for passive cooling |
hw.acpi.thermal.tz0._TC2 |
thermal constant 2 for passive cooling |
hw.acpi.thermal.tz0._TSP |
thermal sampling period for passive cooling |
hw.acpi.thermal.tz0.active |
cooling is active |
hw.acpi.thermal.tz0.passive_cooling |
enable passive (speed reduction) cooling |
hw.acpi.thermal.tz0.temperature |
current thermal zone temperature |
hw.acpi.thermal.tz0.thermal_flags |
thermal zone flags |
hw.acpi.thermal.tz1 |
|
hw.acpi.thermal.tz1._ACx |
|
hw.acpi.thermal.tz1._CR3 |
too warm temp setpoint (standby now) |
hw.acpi.thermal.tz1._CRT |
critical temp setpoint (shutdown now) |
hw.acpi.thermal.tz1._HOT |
too hot temp setpoint (suspend now) |
hw.acpi.thermal.tz1._PSV |
passive cooling temp setpoint |
hw.acpi.thermal.tz1._TC1 |
thermal constant 1 for passive cooling |
hw.acpi.thermal.tz1._TC2 |
thermal constant 2 for passive cooling |
hw.acpi.thermal.tz1._TSP |
thermal sampling period for passive cooling |
hw.acpi.thermal.tz1.active |
cooling is active |
hw.acpi.thermal.tz1.passive_cooling |
enable passive (speed reduction) cooling |
hw.acpi.thermal.tz1.temperature |
current thermal zone temperature |
hw.acpi.thermal.tz1.thermal_flags |
thermal zone flags |
hw.acpi.thermal.user_override |
allow override of thermal settings |
hw.acpi.verbose |
verbose mode |
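The hw.acpi.battery.* and hw.acpi.thermal.* nodes above are ordinary integer sysctls; for the temperature nodes the raw integer is in tenths of a kelvin, which sysctl(8) normally renders in degrees Celsius. A minimal sketch of reading them, assuming the machine actually has a battery and a thermal zone tz0:

    /* Read battery capacity and the tz0 temperature listed above. */
    #include <sys/types.h>
    #include <sys/sysctl.h>

    #include <stdio.h>

    static int
    get_int(const char *name, int *out)
    {
        size_t len = sizeof(*out);

        return (sysctlbyname(name, out, &len, NULL, 0));
    }

    int
    main(void)
    {
        int life, temp;

        if (get_int("hw.acpi.battery.life", &life) == 0)
            printf("battery: %d%% remaining\n", life);
        if (get_int("hw.acpi.thermal.tz0.temperature", &temp) == 0)
            /* Raw value is tenths of a kelvin; convert to Celsius. */
            printf("tz0: %.1f C\n", temp / 10.0 - 273.15);
        return (0);
    }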
hw.amdgpu |
AMD GPU parameters |
hw.amdgpu.abmlevel |
ABM level (0 = off (default), 1-4 = backlight reduction level) |
hw.amdgpu.aspm |
ASPM support (1 = enable, 0 = disable, -1 = auto) |
hw.amdgpu.async_gfx_ring |
Asynchronous GFX rings that could be configured with either different priorities (HP3D ring and LP3D ring), or equal priorities (0 = disabled, 1 = enabled (default)) |
hw.amdgpu.audio |
Audio enable (-1 = auto, 0 = disable, 1 = enable) |
hw.amdgpu.backlight |
Backlight control (0 = pwm, 1 = aux, -1 auto (default)) |
hw.amdgpu.bad_page_threshold |
Bad page threshold(-1 = auto(default value), 0 = disable bad page retirement, -2 = ignore bad page threshold) |
hw.amdgpu.bapm |
BAPM support (1 = enable, 0 = disable, -1 = auto) |
hw.amdgpu.benchmark |
Run benchmark |
hw.amdgpu.cg_mask |
Clockgating flags mask (0 = disable clock gating) |
hw.amdgpu.cik_support |
CIK support (1 = enabled (default), 0 = disabled) |
hw.amdgpu.compute_multipipe |
Force compute queues to be spread across pipes (1 = enable, 0 = disable, -1 = auto) |
hw.amdgpu.dc |
Display Core driver (1 = enable, 0 = disable, -1 = auto (default)) |
hw.amdgpu.dcdebugmask |
all debug options disabled (default)) |
hw.amdgpu.dcfeaturemask |
all stable DC features enabled (default)) |
hw.amdgpu.deep_color |
Deep Color support (1 = enable, 0 = disable (default)) |
hw.amdgpu.discovery |
Allow driver to discover hardware IPs from IP Discovery table at the top of VRAM |
hw.amdgpu.disp_priority |
Display Priority (0 = auto, 1 = normal, 2 = high) |
hw.amdgpu.dpm |
DPM support (1 = enable, 0 = disable, -1 = auto) |
hw.amdgpu.emu_mode |
Emulation mode, (1 = enable, 0 = disable) |
hw.amdgpu.exp_hw_support |
experimental hw support (1 = enable, 0 = disable (default)) |
hw.amdgpu.force_asic_type |
A non negative value used to specify the asic type for all supported GPUs |
hw.amdgpu.forcelongtraining |
force memory long training |
hw.amdgpu.freesync_video |
Enable freesync modesetting optimization feature (0 = off (default), 1 = on) |
hw.amdgpu.fw_load_type |
firmware loading type (3 = rlc backdoor autoload if supported, 2 = smu load if supported, 1 = psp load, 0 = force direct if supported, -1 = auto) |
hw.amdgpu.gartsize |
Size of GART to setup in megabytes (32, 64, etc., -1=auto) |
hw.amdgpu.gpu_recovery |
Enable GPU recovery mechanism, (2 = advanced tdr mode, 1 = enable, 0 = disable, -1 = auto) |
hw.amdgpu.gttsize |
Size of the GTT domain in megabytes (-1 = auto) |
hw.amdgpu.hw_i2c |
hw i2c engine enable (0 = disable) |
hw.amdgpu.ip_block_mask |
IP Block Mask (all blocks enabled (default)) |
hw.amdgpu.job_hang_limit |
how much time allow a job hang and not drop it (default 0) |
hw.amdgpu.lbpw |
Load Balancing Per Watt (LBPW) support (1 = enable, 0 = disable, -1 = auto) |
hw.amdgpu.lockup_timeout |
GPU lockup timeout in ms (default: for bare metal 10000 for non-compute jobs and 60000 for compute jobs; for passthrough or sriov, 10000 for all jobs. 0: keep default value. negative: infinity timeout), format: for bare metal [Non-Compute] or [GFX,Compute,SDMA,Video]; for passthrough or sriov [all jobs] or [GFX,Compute,SDMA,Video]. |
hw.amdgpu.mcbp |
Enable Mid-command buffer preemption (0 = disabled (default), 1 = enabled) |
hw.amdgpu.mes |
Enable Micro Engine Scheduler (0 = disabled (default), 1 = enabled) |
hw.amdgpu.mes_kiq |
Enable Micro Engine Scheduler KIQ (0 = disabled (default), 1 = enabled) |
hw.amdgpu.moverate |
Maximum buffer migration rate in MB/s. (32, 64, etc., -1=auto, 0=1=disabled) |
hw.amdgpu.msi |
MSI support (1 = enable, 0 = disable, -1 = auto) |
hw.amdgpu.noretry |
Disable retry faults (0 = retry enabled, 1 = retry disabled, -1 auto (default)) |
hw.amdgpu.num_kcq |
number of kernel compute queue user want to setup (8 if set to greater than 8 or less than 0, only affect gfx 8+) |
hw.amdgpu.pcie_gen2 |
PCIE Gen2 mode (-1 = auto, 0 = disable, 1 = enable) |
hw.amdgpu.pcie_gen_cap |
PCIE Gen Caps (0: autodetect (default)) |
hw.amdgpu.pcie_lane_cap |
PCIE Lane Caps (0: autodetect (default)) |
hw.amdgpu.pg_mask |
Powergating flags mask (0 = disable power gating) |
hw.amdgpu.ppfeaturemask |
all power features enabled (default)) |
hw.amdgpu.ras_enable |
Enable RAS features on the GPU (0 = disable, 1 = enable, -1 = auto (default)) |
hw.amdgpu.ras_mask |
Mask of RAS features to enable (default 0xffffffff), only valid when ras_enable == 1 |
hw.amdgpu.reset_method |
GPU reset method (-1 = auto (default), 0 = legacy, 1 = mode0, 2 = mode1, 3 = mode2, 4 = baco/bamaco) |
hw.amdgpu.runpm |
PX runtime pm (2 = force enable with BAMACO, 1 = force enable with BACO, 0 = disable, -1 = auto) |
hw.amdgpu.sched_hw_submission |
the max number of HW submissions (default 2) |
hw.amdgpu.sched_jobs |
the max number of jobs supported in the sw queue (default 32) |
hw.amdgpu.sdma_phase_quantum |
SDMA context switch phase quantum (x 1K GPU clock cycles, 0 = no change (default 32)) |
hw.amdgpu.sg_display |
S/G Display (-1 = auto (default), 0 = disable) |
hw.amdgpu.si_support |
SI support (1 = enabled (default), 0 = disabled) |
hw.amdgpu.smu_memory_pool_size |
reserve gtt for smu debug usage, 0 = disable,0x1 = 256Mbyte, 0x2 = 512Mbyte, 0x4 = 1 Gbyte, 0x8 = 2GByte |
hw.amdgpu.smu_pptable_id |
specify pptable id to be used (-1 = auto(default) value, 0 = use pptable from vbios, > 0 = soft pptable id) |
hw.amdgpu.test |
Run tests |
hw.amdgpu.timeout_fatal_disable |
disable watchdog timeout fatal error (false = default) |
hw.amdgpu.timeout_period |
watchdog timeout period (0 = timeout disabled, 1 ~ 0x23 = timeout maxcycles = (1 << period) |
hw.amdgpu.tmz |
Enable TMZ feature (-1 = auto (default), 0 = off, 1 = on) |
hw.amdgpu.use_xgmi_p2p |
Enable XGMI P2P interface (0 = disable; 1 = enable (default)) |
hw.amdgpu.vcnfw_log |
Enable vcnfw log(0 = disable (default value), 1 = enable) |
hw.amdgpu.vis_vramlimit |
Restrict visible VRAM for testing, in megabytes |
hw.amdgpu.visualconfirm |
Visual confirm (0 = off (default), 1 = MPO, 5 = PSR) |
hw.amdgpu.vm_block_size |
VM page table size in bits (default depending on vm_size) |
hw.amdgpu.vm_debug |
Debug VM handling (0 = disabled (default), 1 = enabled) |
hw.amdgpu.vm_fault_stop |
Stop on VM fault (0 = never (default), 1 = print first, 2 = always) |
hw.amdgpu.vm_fragment_size |
VM fragment size in bits (4, 5, etc. 4 = 64K (default), Max 9 = 2M) |
hw.amdgpu.vm_size |
VM address space size in gigabytes (default 64GB) |
hw.amdgpu.vm_update_mode |
VM update using CPU (0 = never (default except for large BAR(LB)), 1 = Graphics only, 2 = Compute only (default for LB), 3 = Both |
hw.amdgpu.vramlimit |
Restrict VRAM for testing, in megabytes |
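Most of the hw.amdgpu.* knobs above mirror the amdgpu module parameters and take effect when the driver initializes, so in practice they are set as boot-time tunables, for example a hypothetical /boot/loader.conf line such as hw.amdgpu.dc="1" to force the Display Core path described above. Whether a given node is also writable on a running system depends on the driver.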
hw.apic |
APIC options |
hw.apic.ds_idle_timeout |
timeout (in us) for APIC Delivery Status to become Idle (xAPIC only) |
hw.apic.enable_extint |
Enable the ExtINT pin in the first I/O APIC |
hw.apic.eoi_suppression |
|
hw.apic.timer_tsc_deadline |
|
hw.apic.x2apic_mode |
|
hw.ata |
ATA driver parameters |
hw.ata.ata_dma_check_80pin |
Check for 80pin cable before setting ATA DMA mode |
hw.ath |
Atheros driver parameters |
hw.ath.anical |
ANI calibration (msecs) |
hw.ath.bstuck |
max missed beacon xmits before chip reset |
hw.ath.hal |
Atheros HAL parameters |
hw.ath.longcal |
long chip calibration interval (secs) |
hw.ath.resetcal |
reset chip calibration results (secs) |
hw.ath.rxbuf |
rx buffers allocated |
hw.ath.shortcal |
short chip calibration interval (msecs) |
hw.ath.txbuf |
tx buffers allocated |
hw.ath.txbuf_mgmt |
tx (mgmt) buffers allocated |
hw.atkbd |
AT keyboard |
hw.atkbd.hz |
Polling frequency (in hz) |
hw.availpages |
Amount of physical memory (in pages) |
hw.bce |
bce driver parameters |
hw.bce.hdr_split |
Frame header/payload splitting Enable/Disable |
hw.bce.msi_enable |
MSI-X|MSI|INTx selector |
hw.bce.rx_pages |
Receive buffer descriptor pages (1 page = 255 buffer descriptors) |
hw.bce.rx_quick_cons_trip |
Receive BD trip point |
hw.bce.rx_quick_cons_trip_int |
Receive BD trip point during interrupts |
hw.bce.rx_ticks |
Receive ticks count |
hw.bce.rx_ticks_int |
Receive ticks count during interrupt |
hw.bce.strict_rx_mtu |
Enable/Disable strict RX frame size checking |
hw.bce.tso_enable |
TSO Enable/Disable |
hw.bce.tx_pages |
Transmit buffer descriptor pages (1 page = 255 buffer descriptors) |
hw.bce.tx_quick_cons_trip |
Transmit BD trip point |
hw.bce.tx_quick_cons_trip_int |
Transmit BD trip point during interrupts |
hw.bce.tx_ticks |
Transmit ticks count |
hw.bce.tx_ticks_int |
Transmit ticks count during interrupt |
hw.bce.verbose |
Verbose output enable/disable |
hw.bge |
BGE driver parameters |
hw.bge.allow_asf |
Allow ASF mode if available |
hw.broken_txfifo |
UART FIFO has QEMU emulation bug |
hw.bus |
|
hw.bus.devctl_nomatch_enabled |
enable nomatch events |
hw.bus.devctl_queue |
devctl queue length |
hw.bus.devices |
system device tree |
hw.bus.disable_failed_devices |
Do not retry attaching devices that return an error from DEVICE_ATTACH the first time |
hw.bus.info |
bus-related data |
hw.bus.rman |
kernel resource manager |
hw.busdma |
Busdma parameters |
hw.busdma.total_bpages |
Total bounce pages |
hw.busdma.zone0 |
|
hw.busdma.zone0.active_bpages |
Active bounce pages |
hw.busdma.zone0.alignment |
|
hw.busdma.zone0.domain |
memory domain |
hw.busdma.zone0.free_bpages |
Free bounce pages |
hw.busdma.zone0.lowaddr |
|
hw.busdma.zone0.reserved_bpages |
Reserved bounce pages |
hw.busdma.zone0.total_bounced |
Total bounce requests (pages bounced) |
hw.busdma.zone0.total_bpages |
Total bounce pages |
hw.busdma.zone0.total_deferred |
Total bounce requests that were deferred |
hw.busdma.zone0.total_deferred_time |
Cumulative time busdma requests are deferred (us) |
hw.busdma.zone1 |
|
hw.busdma.zone1.active_bpages |
Active bounce pages |
hw.busdma.zone1.alignment |
|
hw.busdma.zone1.domain |
memory domain |
hw.busdma.zone1.free_bpages |
Free bounce pages |
hw.busdma.zone1.lowaddr |
|
hw.busdma.zone1.reserved_bpages |
Reserved bounce pages |
hw.busdma.zone1.total_bounced |
Total bounce requests (pages bounced) |
hw.busdma.zone1.total_bpages |
Total bounce pages |
hw.busdma.zone1.total_deferred |
Total bounce requests that were deferred |
hw.bxe |
bxe driver parameters |
hw.bxe.autogreeen |
AutoGrEEEn support |
hw.bxe.debug |
Debug logging mode |
hw.bxe.hc_rx_ticks |
Host Coalescing Rx ticks |
hw.bxe.hc_tx_ticks |
Host Coalescing Tx ticks |
hw.bxe.interrupt_mode |
Interrupt (MSI-X/MSI/INTx) mode |
hw.bxe.max_aggregation_size |
max aggregation size |
hw.bxe.max_rx_bufs |
Maximum Number of Rx Buffers Per Queue |
hw.bxe.mrrs |
PCIe maximum read request size |
hw.bxe.queue_count |
Multi-Queue queue count |
hw.bxe.rx_budget |
Rx processing budget |
hw.bxe.udp_rss |
UDP RSS support |
hw.byteorder |
System byte order |
hw.cardbus |
CardBus parameters |
hw.cardbus.cis_debug |
CardBus CIS debug |
hw.cardbus.debug |
CardBus debug |
hw.cbb |
CBB parameters |
hw.cbb.debug |
Verbose cardbus bridge debugging |
hw.cbb.start_16_io |
Starting ioport for 16-bit cards |
hw.cbb.start_32_io |
Starting ioport for 32-bit cards |
hw.cbb.start_memory |
Starting address for memory allocations |
hw.ciss |
CISS sysctl tunables |
hw.ciss.base_transfer_speed |
force a specific base transfer_speed |
hw.ciss.expose_hidden_physical |
expose hidden physical drives |
hw.ciss.force_interrupt |
use default (0), force INTx (1) or force MSIx(2) interrupts |
hw.ciss.force_transport |
use default (0), force simple (1) or force performant (2) transport |
hw.ciss.initiator_id |
force a specific initiator id |
hw.ciss.nop_message_heartbeat |
nop heartbeat messages |
hw.ciss.verbose |
enable verbose messages |
hw.clockrate |
CPU instruction clock rate |
hw.dri |
DRI args |
hw.dri.[num] |
|
hw.dri.[num].busid |
|
hw.dri.[num].clients |
|
hw.dri.[num].modesetting |
|
hw.dri.[num].name |
|
hw.dri.[num].vblank |
|
hw.dri.__drm_debug |
drm debug flags |
hw.dri.debug |
Enable debug output, where each bit enables a debug category.
Bit 0 (0x01) will enable CORE messages (drm core code)
Bit 1 (0x02) will enable DRIVER messages (drm controller code)
Bit 2 (0x04) will enable KMS messages (modesetting code)
Bit 3 (0x08) will enable PRIME messages (prime code)
Bit 4 (0x10) will enable ATOMIC messages (atomic code)
Bit 5 (0x20) will enable VBL messages (vblank code)
Bit 7 (0x80) will enable LEASE messages (leasing code)
Bit 8 (0x100) will enable DP messages (displayport code) |
hw.dri.dp_aux_i2c_speed_khz |
Assumed speed of the i2c bus in kHz, (1-400, default 10) |
hw.dri.dp_aux_i2c_transfer_size |
Number of bytes to transfer in a single I2C over DP AUX CH message, (1-16, default 16) |
hw.dri.drm_debug_persist |
keep drm debug flags post-load |
hw.dri.drm_fbdev_overalloc |
Overallocation of the fbdev buffer (%) [default=100] |
hw.dri.edid_fixup |
Minimum number of valid EDID header bytes (0-8, default 6) |
hw.dri.fbdev_emulation |
Enable legacy fbdev emulation [default=true] |
hw.dri.poll |
help drm kms poll |
hw.dri.skip_ddb |
go straight to dumping core |
hw.dri.timestamp_precision |
|
hw.dri.timestamp_precision_usec |
Max. error on timestamps [usecs] |
hw.dri.vblank_offdelay |
|
hw.dri.vblankoffdelay |
Delay until vblank irq auto-disable [msecs] (0: never disable, <0: disable immediately) |
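hw.dri.debug (and the related hw.dri.__drm_debug) is a bitmask assembled by OR'ing the category bits listed above. A minimal sketch that enables the CORE and KMS categories, assuming the node is writable at run time on this system and that the caller has privilege:

    /* Enable the CORE (0x01) and KMS (0x04) drm debug categories. */
    #include <sys/types.h>
    #include <sys/sysctl.h>

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        int mask = 0x01 | 0x04;    /* CORE + KMS, per the bit list above */

        if (sysctlbyname("hw.dri.debug", NULL, NULL, &mask, sizeof(mask)) == -1) {
            fprintf(stderr, "hw.dri.debug: %s\n", strerror(errno));
            return (1);
        }
        return (0);
    }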
hw.efi |
EFI |
hw.efi.poweroff |
If true, use EFI runtime services to power off in preference to ACPI |
hw.efi.print_faults |
Print fault information upon trap from EFIRT calls: 0 - never, 1 - once, 2 - always |
hw.efi.total_faults |
Total number of faults that occurred during EFIRT calls |
hw.em |
EM driver parameters |
hw.em.disable_crc_stripping |
Disable CRC Stripping |
hw.em.eee_setting |
Enable Energy Efficient Ethernet |
hw.em.enable_aim |
Enable adaptive interrupt moderation (1=normal, 2=lowlatency) |
hw.em.max_interrupt_rate |
Maximum interrupts per second |
hw.em.rx_abs_int_delay |
Default receive interrupt delay limit in usecs |
hw.em.rx_int_delay |
Default receive interrupt delay in usecs |
hw.em.rx_process_limit |
Maximum number of received packets to process at a time, -1 means unlimited |
hw.em.sbp |
Show bad packets in promiscuous mode |
hw.em.smart_pwr_down |
Set to true to leave smart power down enabled on newer adapters |
hw.em.tx_abs_int_delay |
Default transmit interrupt delay limit in usecs |
hw.em.tx_int_delay |
Default transmit interrupt delay in usecs |
hw.em.unsupported_tso |
Allow unsupported em(4) TSO configurations |
hw.ena |
ENA driver parameters |
hw.ena.driver_version |
ENA driver version |
hw.ena.enable_9k_mbufs |
Use 9 kB mbufs for Rx descriptors |
hw.ena.force_large_llq_header |
Change default LLQ entry size received from the device |
hw.ena.log_level |
Logging level indicating verbosity of the logs |
hw.floatingpoint |
Floating point instructions executed in hardware |
hw.hid |
HID debugging |
hw.hid.debug |
Debug level |
hw.hid.hkbd |
USB keyboard |
hw.hid.hkbd.debug |
Debug level |
hw.hid.hkbd.no_leds |
Disables setting of keyboard LEDs |
hw.hid.hmt |
MSWindows 7/8/10 compatible HID Multi-touch Device |
hw.hid.hmt.timestamps |
Enable hardware timestamp reporting |
hw.hn |
Hyper-V network interface |
hw.hn.chan_cnt |
# of channels to use; each channel has one RX ring and one TX ring |
hw.hn.direct_tx_size |
Size of the packet for direct transmission |
hw.hn.enable_udp4cs |
Offload UDP/IPv4 checksum |
hw.hn.enable_udp6cs |
Offload UDP/IPv6 checksum |
hw.hn.lro_entry_count |
LRO entry count |
hw.hn.lro_mbufq_depth |
Depth of LRO mbuf queue |
hw.hn.trust_hostip |
Trust ip packet verification on host side, when csum info is missing (global setting) |
hw.hn.trust_hosttcp |
Trust tcp segment verification on host side, when csum info is missing (global setting) |
hw.hn.trust_hostudp |
Trust udp datagram verification on host side, when csum info is missing (global setting) |
hw.hn.tso_maxlen |
TSO burst limit |
hw.hn.tx_agg_pkts |
Packet transmission aggregation packet limit |
hw.hn.tx_agg_size |
Packet transmission aggregation size limit |
hw.hn.tx_chimney_size |
Chimney send packet size limit |
hw.hn.tx_ring_cnt |
# of TX rings to use |
hw.hn.tx_swq_depth |
Depth of IFQ or BUFRING |
hw.hn.tx_taskq_cnt |
# of TX taskqueues |
hw.hn.tx_taskq_mode |
TX taskqueue modes: 0 - independent, 1 - share global tx taskqs, 2 - share event taskqs |
hw.hn.udpcs_fixup |
# of UDP checksum fixup |
hw.hn.udpcs_fixup_mtu |
UDP checksum fixup MTU threshold |
hw.hn.use_if_start |
Use if_start TX method |
hw.hn.use_txdesc_bufring |
Use buf_ring for TX descriptors |
hw.hn.vf_transparent |
Transparent VF mode |
hw.hn.vf_xpnt_accbpf |
Accurate BPF for transparent VF |
hw.hn.vf_xpnt_attwait |
Extra wait for transparent VF attach routine; unit: seconds |
hw.hn.vflist |
VF list |
hw.hn.vfmap |
VF mapping |
hw.hv_vendor |
Hypervisor vendor |
hw.hvtimesync |
Hyper-V timesync interface |
hw.hvtimesync.ignore_sync |
Ignore the sync request. |
hw.hvtimesync.sample_thresh |
Threshold that makes sample request trigger the sync (unit: ms). |
hw.hvtimesync.sample_verbose |
Increase sample request verbosity. |
hw.i2c |
i2c controls |
hw.i2c.iicbb_debug |
Enable i2c bit-banging driver debug |
hw.iavf |
IAVF driver parameters |
hw.iavf.core_debug_mask |
Display debug statements that are printed in non-shared code |
hw.iavf.enable_head_writeback |
For detecting last completed TX descriptor by hardware, use value written by HW instead of checking descriptors. For 700 series VFs only. |
hw.iavf.rx_itr |
RX Interrupt Rate |
hw.iavf.shared_debug_mask |
Display debug statements that are printed in shared code |
hw.iavf.tx_itr |
TX Interrupt Rate |
hw.ibrs_active |
Indirect Branch Restricted Speculation active |
hw.ibrs_disable |
Disable Indirect Branch Restricted Speculation |
hw.ice |
ICE driver parameters |
hw.ice.debug |
ICE driver debug parameters |
hw.ice.debug.enable_tx_fc_filter |
Drop Ethertype 0x8808 control frames originating from non-HW sources |
hw.ice.debug.enable_tx_lldp_filter |
Drop Ethertype 0x88cc LLDP frames originating from non-HW sources |
hw.ice.debug.tx_balance_en |
Enable 5-layer scheduler topology |
hw.ice.enable_health_events |
Enable FW health event reporting globally |
hw.ice.irdma |
Enable iRDMA client interface |
hw.ice.rdma_max_msix |
Maximum number of MSI-X vectors to reserve per RDMA interface |
hw.igc |
igc driver parameters |
hw.igc.disable_crc_stripping |
Disable CRC Stripping |
hw.igc.eee_setting |
Enable Energy Efficient Ethernet |
hw.igc.enable_aim |
Enable adaptive interrupt moderation (1=normal, 2=lowlatency) |
hw.igc.max_interrupt_rate |
Maximum interrupts per second |
hw.igc.rx_abs_int_delay |
Default receive interrupt delay limit in usecs |
hw.igc.rx_int_delay |
Default receive interrupt delay in usecs |
hw.igc.rx_process_limit |
Maximum number of received packets to process at a time, -1 means unlimited |
hw.igc.sbp |
Show bad packets in promiscuous mode |
hw.igc.smart_pwr_down |
Set to true to leave smart power down enabled on newer adapters |
hw.igc.tx_abs_int_delay |
Default transmit interrupt delay limit in usecs |
hw.igc.tx_int_delay |
Default transmit interrupt delay in usecs |
hw.instruction_sse |
SIMD/MMX2 instructions available in CPU |
hw.intel_graphics_stolen_base |
Base address of the intel graphics stolen memory. |
hw.intel_graphics_stolen_size |
Size of the intel graphics stolen memory. |
hw.intr_epoch_batch |
Maximum interrupt handler executions without re-entering epoch(9) |
hw.intr_hwpmc_waiting_report_threshold |
Threshold for reporting number of events in a workq |
hw.intr_storm_threshold |
Number of consecutive interrupts before storm protection is enabled |
hw.intrbalance |
Interrupt auto-balance interval (seconds). Zero disables. |
hw.intrcnt |
Interrupt Counts |
hw.intrnames |
Interrupt Names |
hw.intrs |
interrupt:number @cpu: count |
hw.ioat |
ioat node |
hw.ioat.channels |
Number of IOAT channels attached |
hw.ioat.debug_level |
Set log level (0-3) for ioat(4). Higher is more verbose. |
hw.ioat.enable_ioat_test |
Non-zero: Enable the /dev/ioat_test device |
hw.ioat.force_legacy_interrupts |
Set to non-zero to force MSI-X disabled |
hw.ioat.ring_order |
Set IOAT ring order. (1 << this) == ring size. |
hw.iommu |
|
hw.iommu.batch_coalesce |
Number of qi batches between interrupt |
hw.iommu.check_free |
Check the GPA RBtree for free_down and free_after validity |
hw.iommu.dmar |
|
hw.iommu.dmar.batch_coalesce |
Number of qi batches between interrupt |
hw.iommu.dmar.tbl_pagecnt |
Count of pages used for DMAR pagetables |
hw.iommu.dmar.timeout |
Timeout for command wait, in nanoseconds |
hw.iommu.tbl_pagecnt |
Count of pages used for IOMMU pagetables |
hw.ix |
IXGBE driver parameters |
hw.ix.advertise_speed |
Default advertised speed for all adapters |
hw.ix.enable_aim |
Enable adaptive interrupt moderation |
hw.ix.enable_fdir |
Enable Flow Director |
hw.ix.enable_msix |
Enable MSI-X interrupts |
hw.ix.enable_rss |
Enable Receive-Side Scaling (RSS) |
hw.ix.flow_control |
Default flow control used for all adapters |
hw.ix.max_interrupt_rate |
Maximum interrupts per second |
hw.ix.unsupported_sfp |
Allow unsupported SFP modules...use at your own risk |
hw.ixl |
ixl driver parameters |
hw.ixl.core_debug_mask |
Display debug statements that are printed in non-shared code |
hw.ixl.enable_head_writeback |
For detecting last completed TX descriptor by hardware, use value written by HW instead of checking descriptors |
hw.ixl.enable_vf_loopback |
Determines mode that embedded device switch will use when SR-IOV is initialized:
0 - Disable (VEPA)
1 - Enable (VEB)
Enabling this will allow VFs in separate VMs to communicate over the hardware bridge. |
hw.ixl.flow_control |
Initial Flow Control setting |
hw.ixl.i2c_access_method |
I2C access method that driver will use:
0 - best available method
1 - bit bang via I2CPARAMS register
2 - register read/write via I2CCMD register
3 - Use Admin Queue command (best)
Using the Admin Queue is only supported on 710 devices with FW version 1.7 or higher |
hw.ixl.rx_itr |
RX Interrupt Rate |
hw.ixl.shared_debug_mask |
Display debug statements that are printed in shared code |
hw.ixl.tx_itr |
TX Interrupt Rate |
hw.kbd |
kbd |
hw.kbd.keymap_restrict_change |
restrict ability to change keymap |
hw.lower_amd64_sharedpage |
Lower sharedpage to work around Ryzen issue with executing code near the top of user memory |
hw.machine |
Machine class |
hw.machine_arch |
System architecture |
hw.malo |
Marvell 88w8335 driver parameters |
hw.malo.pci |
Marvell 88W8335 driver PCI parameters |
hw.malo.pci.msi_disable |
MSI disabled |
hw.malo.rxbuf |
rx buffers allocated |
hw.malo.rxquota |
max rx buffers to process per interrupt |
hw.malo.txbuf |
tx buffers allocated |
hw.malo.txcoalesce |
tx buffers to send at once |
hw.mca |
Machine Check Architecture |
hw.mca.amd10h_L1TP |
Administrative toggle for logging of level one TLB parity (L1TP) errors |
hw.mca.cmc_throttle |
Interval in seconds to throttle corrected MC interrupts |
hw.mca.count |
Record count |
hw.mca.enabled |
Administrative toggle for machine check support |
hw.mca.erratum383 |
Is the workaround for Erratum 383 on AMD Family 10h processors enabled? |
hw.mca.force_scan |
Force an immediate scan for machine checks |
hw.mca.intel6h_HSD131 |
Administrative toggle for logging of spurious corrected errors |
hw.mca.interval |
Periodic interval in seconds to scan for machine checks |
hw.mca.log_corrected |
Log corrected errors to the console |
hw.mca.maxcount |
Maximum record count (-1 is unlimited) |
hw.mca.records |
Machine check records |
hw.mds_disable |
Microarchitectural Data Sampling Mitigation (0 - off, 1 - on VERW, 2 - on SW, 3 - on AUTO) |
hw.mds_disable_state |
Microarchitectural Data Sampling Mitigation state |
hw.mfi |
MFI driver parameters |
hw.mfi.cmd_timeout |
Command timeout (in seconds) |
hw.mfi.detect_jbod_change |
Detect a change to a JBOD |
hw.mfi.event_class |
event message class |
hw.mfi.event_locale |
event message locale |
hw.mfi.max_cmds |
Max commands limit (-1 = controller limit) |
hw.mfi.mrsas_enable |
Allow mrsas to take newer cards |
hw.mfi.msi |
Enable use of MSI interrupts |
hw.mfi.polled_cmd_timeout |
Polled command timeout - used for firmware flash, etc. (in seconds) |
hw.midi |
Midi driver |
hw.midi.debug |
|
hw.midi.dumpraw |
|
hw.midi.instroff |
|
hw.midi.seq |
Midi sequencer |
hw.midi.seq.debug |
|
hw.midi.stat |
Status device |
hw.midi.stat.verbose |
|
hw.mlx5 |
mlx5 hardware controls |
hw.mlx5.auto_fw_update |
Allow automatic firmware update on driver start |
hw.mlx5.calibr |
MLX5 timestamp calibration parameters |
hw.mlx5.calibr.duration |
Duration of initial calibration |
hw.mlx5.calibr.fast |
Recalibration interval during initial calibration |
hw.mlx5.calibr.normal |
Recalibration interval during normal operations |
hw.mlx5.comp_eq_size |
Set default completion EQ size between 1024 and 16384 inclusive. Value should be a power of two. |
hw.mlx5.debug_mask |
debug mask: 1 = dump cmd data, 2 = dump cmd exec time, 3 = both. Default=0 |
hw.mlx5.fast_unload_enabled |
Set to enable fast unload. Clear to disable. |
hw.mlx5.fw_dump_enable |
Enable fw dump setup and op |
hw.mlx5.fw_reset_enable |
Enable firmware reset |
hw.mlx5.prof_sel |
profile selector. Valid range 0 - 2 |
hw.mlx5.relaxed_ordering_write |
Set to enable relaxed ordering for PCIe writes |
hw.mlx5.sw_reset_timeout |
Minimum timeout in seconds between two firmware resets |
hw.mmc |
mmc driver |
hw.mmc.debug |
Debug level |
hw.mmcsd |
mmcsd driver |
hw.mmcsd.cache |
Device R/W cache enabled if present |
hw.model |
Machine model |
hw.mpi3mr |
MPI3MR Driver Parameters |
hw.mpr |
MPR Driver Parameters |
hw.mps |
MPS Driver Parameters |
hw.mrsas |
MRSAS Driver Parameters |
hw.mwl |
Marvell driver parameters |
hw.mwl.hal |
Marvell HAL parameters |
hw.mwl.rxbuf |
rx buffers allocated |
hw.mwl.rxdesc |
rx descriptors allocated |
hw.mwl.rxdmalow |
min free rx buffers before restarting traffic |
hw.mwl.rxquota |
max rx buffers to process per interrupt |
hw.mwl.txbuf |
tx buffers allocated |
hw.mwl.txcoalesce |
tx buffers to send at once |
hw.ncpu |
Number of active CPUs |
hw.nvd |
nvd driver parameters |
hw.nvd.delete_max |
nvd maximum BIO_DELETE size in bytes |
hw.nvidia |
NVIDIA SYSCTL Parent Node |
hw.nvidia.gpus |
NVIDIA SYSCTL GPUs Node |
hw.nvidia.gpus.[num] |
NVIDIA SYSCTL GPU Node |
hw.nvidia.gpus.[num].firmware |
NVIDIA GPU Firmware Version |
hw.nvidia.gpus.[num].irq |
NVIDIA GPU IRQ Number |
hw.nvidia.gpus.[num].model |
NVIDIA GPU Model Name |
hw.nvidia.gpus.[num].type |
NVIDIA GPU Bus Type |
hw.nvidia.gpus.[num].uuid |
NVIDIA GPU UUID |
hw.nvidia.gpus.[num].vbios |
NVIDIA GPU VBIOS Version |
hw.nvidia.registry |
NVIDIA SYSCTL Registry Node |
hw.nvidia.registry.CreateImexChannel0 |
|
hw.nvidia.registry.DeviceFileGID |
|
hw.nvidia.registry.DeviceFileMode |
|
hw.nvidia.registry.DeviceFileUID |
|
hw.nvidia.registry.DmaRemapPeerMmio |
|
hw.nvidia.registry.DynamicPowerManagement |
|
hw.nvidia.registry.DynamicPowerManagementVideoMemoryThreshold |
|
hw.nvidia.registry.EnableDbgBreakpoint |
|
hw.nvidia.registry.EnableGpuFirmware |
|
hw.nvidia.registry.EnableGpuFirmwareLogs |
|
hw.nvidia.registry.EnableMSI |
|
hw.nvidia.registry.EnablePCIERelaxedOrderingMode |
|
hw.nvidia.registry.EnablePCIeGen3 |
|
hw.nvidia.registry.EnableResizableBar |
|
hw.nvidia.registry.EnableS0ixPowerManagement |
|
hw.nvidia.registry.EnableStreamMemOPs |
|
hw.nvidia.registry.EnableUserNUMAManagement |
|
hw.nvidia.registry.GrdmaPciTopoCheckOverride |
|
hw.nvidia.registry.IgnoreMMIOCheck |
|
hw.nvidia.registry.ImexChannelCount |
|
hw.nvidia.registry.InitializeSystemMemoryAllocations |
|
hw.nvidia.registry.KMallocHeapMaxSize |
|
hw.nvidia.registry.MemoryPoolSize |
|
hw.nvidia.registry.ModifyDeviceFiles |
|
hw.nvidia.registry.NvLinkDisable |
|
hw.nvidia.registry.OpenRmEnableUnsupportedGpus |
|
hw.nvidia.registry.PreserveVideoMemoryAllocations |
|
hw.nvidia.registry.RegisterPCIDriver |
|
hw.nvidia.registry.ResmanDebugLevel |
|
hw.nvidia.registry.RmLogonRC |
|
hw.nvidia.registry.RmNvlinkBandwidthLinkCount |
|
hw.nvidia.registry.RmProfilingAdminOnly |
|
hw.nvidia.registry.S0ixPowerManagementVideoMemoryThreshold |
|
hw.nvidia.registry.TCEBypassMode |
|
hw.nvidia.registry.UsePageAttributeTable |
|
hw.nvidia.registry.VMallocHeapMaxSize |
|
hw.nvidia.registry.dwords |
|
hw.nvidia.version |
NVIDIA Resource Manager (NVRM) Version |
hw.nvidiadrm |
nvidia-drm kernel module parameters |
hw.nvidiadrm.fbdev |
Enable atomic kernel modesetting (1 = enable (default), 0 = disable) |
hw.nvidiadrm.modeset |
Enable atomic kernel modesetting (1 = enable, 0 = disable (default)) |
hw.nvme |
NVMe sysctl tunables |
hw.nvme.use_nvd |
1 = Create NVD devices, 0 = Create NDA devices |
hw.nvme.verbose_cmd_dump |
enable verbose command printing when a command fails |
hw.pagesize |
System memory page size |
hw.pagesizes |
Supported page sizes |
hw.pci |
PCI bus tuning parameters |
hw.pci.clear_aer_on_attach |
Clear port and device AER state on driver attach |
hw.pci.clear_bars |
Ignore firmware-assigned resources for BARs. |
hw.pci.clear_buses |
Ignore firmware-assigned bus numbers. |
hw.pci.clear_pcib |
Clear firmware-assigned resources for PCI-PCI bridge I/O windows. |
hw.pci.default_vgapci_unit |
Default VGA-compatible display |
hw.pci.do_power_nodriver |
Place a function into D3 state when no driver attaches to it. 0 means disable. 1 means conservatively place function into D3 state. 2 means aggressively place function into D3 state. 3 means put absolutely everything in D3 state. |
hw.pci.do_power_resume |
Transition from D3 -> D0 on resume. |
hw.pci.do_power_suspend |
Transition from D0 -> D3 on suspend. |
hw.pci.enable_ari |
Enable support for PCIe Alternative RID Interpretation |
hw.pci.enable_aspm |
Enable support for PCIe Active State Power Management |
hw.pci.enable_io_modes |
Enable I/O and memory bits in the config register. Some BIOSes do not enable these bits correctly. We'd like to do this all the time, but there are some peripherals that this causes problems with. |
hw.pci.enable_mps_tune |
Enable tuning of MPS (maximum payload size). |
hw.pci.enable_msi |
Enable support for MSI interrupts |
hw.pci.enable_msix |
Enable support for MSI-X interrupts |
hw.pci.enable_pcie_ei |
Enable support for PCI-express Electromechanical Interlock. |
hw.pci.enable_pcie_hp |
Enable support for native PCI-express HotPlug. |
hw.pci.honor_msi_blacklist |
Honor chipset blacklist for MSI/MSI-X |
hw.pci.host_mem_start |
Limit the host bridge memory to being above this address. |
hw.pci.intx_reroute |
Re-route INTx interrupts when scanning devices |
hw.pci.iov_max_config |
Maximum allowed size of SR-IOV configuration. |
hw.pci.mcfg |
Enable support for PCI-e memory mapped config access |
hw.pci.msix_rewrite_table |
Rewrite entire MSI-X table when updating MSI-X entries |
hw.pci.pcie_hp_detach_timeout |
Attention Button delay for PCI-express Eject. |
hw.pci.realloc_bars |
Attempt to allocate a new range for any BARs whose original firmware-assigned ranges fail to allocate during the initial device scan. |
hw.pci.usb_early_takeover |
Enable early takeover of USB controllers. Disable this if you depend on BIOS emulation of USB devices, that is, you use USB devices (like a keyboard or mouse) but do not load USB drivers |
hw.physmem |
Amount of physical memory (in bytes) |
hw.psm |
ps/2 mouse |
hw.psm.elantech_support |
Enable support for Elantech touchpads |
hw.psm.mux_disabled |
Disable active multiplexing |
hw.psm.synaptics_support |
Enable support for Synaptics touchpads |
hw.psm.tap_enabled |
Enable tap and drag gestures |
hw.psm.tap_threshold |
Button tap threshold |
hw.psm.tap_timeout |
Tap timeout for touchpads |
hw.psm.trackpoint_support |
Enable support for IBM/Lenovo TrackPoint |
hw.puc |
puc(4) driver configuration |
hw.puc.msi_disable |
Disable use of MSI interrupts by puc(4) |
hw.realmem |
Amount of memory (in bytes) reported by the firmware |
hw.sdhci |
sdhci driver |
hw.sdhci.debug |
Debug level |
hw.sdhci.enable_msi |
Enable MSI interrupts |
hw.sdhci.quirk_clear |
Mask of quirks to clear |
hw.sdhci.quirk_set |
Mask of quirks to set |
hw.snd |
Sound driver |
hw.snd.basename_clone |
DSP basename cloning (0: Disabled; 1: Enabled) |
hw.snd.compat_linux_mmap |
linux mmap compatibility (-1=force disable 0=auto 1=force enable) |
hw.snd.default_auto |
assign default unit to a newly attached device |
hw.snd.default_unit |
default sound device |
hw.snd.feeder_eq_exact_rate |
force exact rate validation |
hw.snd.feeder_eq_presets |
compile-time eq presets |
hw.snd.feeder_rate_max |
maximum allowable rate |
hw.snd.feeder_rate_min |
minimum allowable rate |
hw.snd.feeder_rate_polyphase_max |
maximum allowable polyphase entries |
hw.snd.feeder_rate_presets |
compile-time rate presets |
hw.snd.feeder_rate_quality |
sample rate converter quality (0=low .. 4=high) |
hw.snd.feeder_rate_round |
sample rate converter rounding threshold |
hw.snd.latency |
buffering latency (0=low ... 10=high) |
hw.snd.latency_profile |
buffering latency profile (0=aggressive 1=safe) |
hw.snd.maxautovchans |
maximum number of virtual channels |
hw.snd.report_soft_formats |
report software-emulated formats |
hw.snd.report_soft_matrix |
report software-emulated channel matrixing |
hw.snd.syncdelay |
append (0-1000) millisecond trailing buffer delay on each sync |
hw.snd.timeout |
interrupt timeout (1 - 10) seconds |
hw.snd.usefrags |
prefer setfragments() over setblocksize() |
hw.snd.vchans_enable |
global virtual channel switch |
hw.snd.verbose |
verbosity level |
hw.snd.version |
driver version/arch |
hw.snd.vpc_0db |
0db relative level |
hw.snd.vpc_autoreset |
automatically reset channels volume to 0db |
hw.snd.vpc_mixer_bypass |
control channel pcm/rec volume, bypassing real mixer device |
hw.snd.vpc_reset |
reset volume on all channels |
hw.spec_store_bypass_disable |
Speculative Store Bypass Disable (0 - off, 1 - on, 2 - auto) |
hw.spec_store_bypass_disable_active |
Speculative Store Bypass Disable active |
hw.storvsc |
Hyper-V storage interface |
hw.storvsc.chan_cnt |
# of channels to use |
hw.storvsc.max_io |
Hyper-V storage max io limit |
hw.storvsc.ringbuffer_size |
Hyper-V storage ringbuffer size |
hw.storvsc.use_pim_unmapped |
Optimize storvsc by using unmapped I/O |
hw.storvsc.use_win8ext_flags |
Use win8 extension flags or not |
hw.syscons |
syscons |
hw.syscons.bell |
enable bell |
hw.syscons.kbd_debug |
enable keyboard debug |
hw.syscons.kbd_reboot |
enable keyboard reboot |
hw.syscons.saver |
saver |
hw.syscons.saver.keybonly |
screen saver interrupted by input only |
hw.syscons.sc_no_suspend_vtswitch |
Disable VT switch before suspend. |
hw.ttm |
TTM memory manager parameters |
hw.ttm.dma32_pages_limit |
Limit for the allocated DMA32 pages |
hw.ttm.page_pool_size |
Number of pages in the WC/UC/DMA pool |
hw.ttm.pages_limit |
Limit for the allocated pages |
hw.uart_noise_threshold |
Number of UART RX interrupts where TX is not ready, before data is discarded |
hw.usb |
USB debugging |
hw.usb.ctrl |
USB controller |
hw.usb.ctrl.debug |
Debug level |
hw.usb.debug |
Debug level |
hw.usb.dev |
USB device |
hw.usb.dev.debug |
Debug Level |
hw.usb.disable_enumeration |
Set to disable all USB device enumeration. This can secure against USB devices turning evil, for example a USB memory stick becoming a USB keyboard. |
hw.usb.disable_port_power |
Set to disable all USB port power. |
hw.usb.ehci |
USB ehci |
hw.usb.ehci.debug |
Debug level |
hw.usb.ehci.iaadbug |
Enable doorbell bug workaround |
hw.usb.ehci.lostintrbug |
Enable lost interrupt bug workaround |
hw.usb.ehci.no_hs |
Disable High Speed USB |
hw.usb.full_ddesc |
USB always read complete device descriptor, if set |
hw.usb.no_boot_wait |
No USB device enumerate waiting at boot. |
hw.usb.no_cs_fail |
USB clear stall failures are ignored, if set |
hw.usb.no_shutdown_wait |
No USB device waiting at system shutdown. |
hw.usb.no_suspend_wait |
No USB device waiting at system suspend. |
hw.usb.ohci |
USB ohci |
hw.usb.ohci.debug |
ohci debug level |
hw.usb.power_timeout |
USB power timeout |
hw.usb.proc |
USB process |
hw.usb.proc.debug |
Debug level |
hw.usb.template |
Selected USB device side template |
hw.usb.timings |
Timings |
hw.usb.timings.enum_nice_time |
Enumeration thread nice time |
hw.usb.timings.extra_power_up_time |
Extra PowerUp Time |
hw.usb.timings.port_powerup_delay |
Port PowerUp Delay |
hw.usb.timings.port_reset_delay |
Port Reset Delay |
hw.usb.timings.port_reset_recovery |
Port Reset Recovery |
hw.usb.timings.port_resume_delay |
Port Resume Delay |
hw.usb.timings.port_root_reset_delay |
Root Port Reset Delay |
hw.usb.timings.resume_delay |
Resume Delay |
hw.usb.timings.resume_recovery |
Resume Recovery |
hw.usb.timings.resume_wait |
Resume Wait |
hw.usb.timings.set_address_settle |
Set Address Settle |
hw.usb.uaudio |
USB uaudio |
hw.usb.uaudio.buffer_ms |
uaudio buffering delay in milliseconds, from 1 to 8 |
hw.usb.uaudio.default_bits |
uaudio default sample bits |
hw.usb.uaudio.default_channels |
uaudio default sample channels |
hw.usb.uaudio.default_rate |
uaudio default sample rate |
hw.usb.uaudio.handle_hid |
uaudio handles any HID volume/mute keys, if set |
hw.usb.ucom |
USB ucom |
hw.usb.ucom.cons_baud |
console baud rate |
hw.usb.ucom.cons_subunit |
console subunit number |
hw.usb.ucom.cons_unit |
console unit number |
hw.usb.ucom.debug |
ucom debug level |
hw.usb.ucom.device_mode_console |
set to 1 to mark terminals as consoles when in device mode |
hw.usb.ucom.pps_mode |
pulse capture mode: 0/1/2=disabled/CTS/DCD; add 0x10 to invert |
hw.usb.ugen |
USB generic |
hw.usb.ugen.debug |
Debug level |
hw.usb.uhci |
USB uhci |
hw.usb.uhci.debug |
uhci debug level |
hw.usb.uhci.loop |
uhci noloop |
hw.usb.uhid |
USB uhid |
hw.usb.uhid.debug |
Debug level |
hw.usb.uhub |
USB HUB |
hw.usb.uhub.debug |
Debug level |
hw.usb.ukbd |
USB keyboard |
hw.usb.ukbd.debug |
Debug level |
hw.usb.ukbd.no_leds |
Disables setting of keyboard LEDs |
hw.usb.ukbd.pollrate |
Force this polling rate, 1-1000Hz |
hw.usb.ulpt |
USB ulpt |
hw.usb.ulpt.debug |
Debug level |
hw.usb.umass |
USB umass |
hw.usb.umass.debug |
umass debug level |
hw.usb.umass.throttle |
Forced delay between commands in milliseconds |
hw.usb.umodem |
USB umodem |
hw.usb.umodem.debug |
Debug level |
hw.usb.ums |
USB ums |
hw.usb.ums.debug |
Debug level |
hw.usb.usb_lang_id |
Preferred USB language ID |
hw.usb.usb_lang_mask |
Preferred USB language mask |
hw.usb.usbhid |
USB usbhid |
hw.usb.usbhid.debug |
Debug level |
hw.usb.usbhid.enable |
Enable usbhid and prefer it to other USB HID drivers |
hw.usb.wmt |
USB MSWindows 7/8/10 compatible Multi-touch Device |
hw.usb.wmt.debug |
Debug level |
hw.usb.wmt.timestamps |
Enable hardware timestamp reporting |
hw.usb.xhci |
USB XHCI |
hw.usb.xhci.ctlquirk |
Set to enable control endpoint quirk |
hw.usb.xhci.ctlstep |
Set to enable control endpoint status stage stepping |
hw.usb.xhci.dcepquirk |
Set to disable endpoint deconfigure command |
hw.usb.xhci.debug |
Debug level |
hw.usb.xhci.dma32 |
Set to only use 32-bit DMA for the XHCI controller |
hw.usb.xhci.streams |
Set to enable streams mode support |
hw.usb.xhci.use_polling |
Set to enable software interrupt polling for the XHCI controller |
hw.usb.xhci.xhci_port_route |
Routing bitmap for switching EHCI ports to the XHCI controller |
hw.usermem |
Amount of memory (in bytes) which is not wired |
hw.via_feature_rng |
VIA RNG feature available in CPU |
hw.via_feature_xcrypt |
VIA xcrypt feature available in CPU |
hw.vmbus |
Hyper-V vmbus |
hw.vmbus.pin_evttask |
Pin event tasks to their respective CPU |
hw.vmbus.tlb_hcall |
Use Hyper-V hypercall for TLB flush |
hw.vmd |
Intel Volume Management Device tuning parameters |
hw.vmd.bypass_msi |
Bypass MSI remapping on capable hardware |
hw.vmd.max_msi |
Maximum number of MSI vectors per device |
hw.vmd.max_msix |
Maximum number of MSI-X vectors per device |
hw.vmm |
|
hw.vmm.amdvi |
|
hw.vmm.amdvi.count |
|
hw.vmm.amdvi.disable_io_fault |
|
hw.vmm.amdvi.domain_id |
|
hw.vmm.amdvi.enable |
|
hw.vmm.amdvi.host_ptp |
|
hw.vmm.amdvi.ptp_level |
|
hw.vmm.bhyve_xcpuids |
Number of times an unknown cpuid leaf was accessed |
hw.vmm.create |
|
hw.vmm.destroy |
|
hw.vmm.ept |
|
hw.vmm.ept.pmap_flags |
|
hw.vmm.halt_detection |
Halt VM if all vcpus execute HLT with interrupts disabled |
hw.vmm.iommu |
bhyve iommu parameters |
hw.vmm.iommu.enable |
Enable use of I/O MMU (required for PCI passthrough). |
hw.vmm.iommu.initialized |
bhyve iommu initialized? |
hw.vmm.ipinum |
IPI vector used for vcpu notifications |
hw.vmm.maxcpu |
Maximum number of vCPUs |
hw.vmm.npt |
|
hw.vmm.npt.pmap_flags |
|
hw.vmm.ppt |
bhyve passthru devices |
hw.vmm.ppt.devices |
number of pci passthru devices |
hw.vmm.svm |
|
hw.vmm.svm.disable_npf_assist |
|
hw.vmm.svm.features |
SVM features advertised by CPUID.8000000AH:EDX |
hw.vmm.svm.num_asids |
Number of ASIDs supported by this processor |
hw.vmm.svm.vmcb_clean |
|
hw.vmm.topology |
|
hw.vmm.topology.cpuid_leaf_b |
|
hw.vmm.trace_guest_exceptions |
Trap into hypervisor on all guest exceptions and reflect them back |
hw.vmm.trap_wbinvd |
WBINVD triggers a VM-exit |
hw.vmm.vmx |
|
hw.vmm.vmx.cap |
|
hw.vmm.vmx.cap.halt_exit |
HLT triggers a VM-exit |
hw.vmm.vmx.cap.invpcid |
Guests are allowed to use INVPCID |
hw.vmm.vmx.cap.monitor_trap |
Monitor trap flag |
hw.vmm.vmx.cap.pause_exit |
PAUSE triggers a VM-exit |
hw.vmm.vmx.cap.posted_interrupts |
APICv posted interrupt support |
hw.vmm.vmx.cap.rdpid |
Guests are allowed to use RDPID |
hw.vmm.vmx.cap.rdtscp |
Guests are allowed to use RDTSCP |
hw.vmm.vmx.cap.tpr_shadowing |
TPR shadowing support |
hw.vmm.vmx.cap.unrestricted_guest |
Unrestricted guests |
hw.vmm.vmx.cap.virtual_interrupt_delivery |
APICv virtual interrupt delivery support |
hw.vmm.vmx.cap.wbinvd_exit |
WBINVD triggers a VM-exit |
hw.vmm.vmx.cr0_ones_mask |
|
hw.vmm.vmx.cr0_zeros_mask |
|
hw.vmm.vmx.cr4_ones_mask |
|
hw.vmm.vmx.cr4_zeros_mask |
|
hw.vmm.vmx.initialized |
Intel VMX initialized |
hw.vmm.vmx.l1d_flush |
|
hw.vmm.vmx.l1d_flush_sw |
|
hw.vmm.vmx.no_flush_rsb |
Do not flush RSB upon vmexit |
hw.vmm.vmx.posted_interrupt_vector |
APICv posted interrupt vector |
hw.vmm.vmx.vpid_alloc_failed |
|
hw.vmm.vrtc |
|
hw.vmm.vrtc.flag_broken_time |
Stop guest when invalid RTC time is detected |
hw.vtnet |
VirtIO Net driver parameters |
hw.vtnet.altq_disable |
Disables ALTQ Support |
hw.vtnet.csum_disable |
Disables receive and send checksum offload |
hw.vtnet.fixup_needs_csum |
Calculate valid checksum for NEEDS_CSUM packets |
hw.vtnet.lro_disable |
Disables hardware LRO |
hw.vtnet.lro_entry_count |
Software LRO entry count |
hw.vtnet.lro_mbufq_depth |
Depth of software LRO mbuf queue |
hw.vtnet.mq_disable |
Disables multiqueue support |
hw.vtnet.mq_max_pairs |
Maximum number of multiqueue pairs |
hw.vtnet.rx_process_limit |
Number of RX segments processed in one pass |
hw.vtnet.tso_disable |
Disables TSO |
hw.vtnet.tso_maxlen |
TSO burst limit |
hw.watchdog |
Main watchdog device |
hw.watchdog.wd_last_u |
Watchdog last update time |
hw.watchdog.wd_last_u_secs |
Watchdog last update time |
hw.xbd |
xbd driver parameters |
hw.xbd.xbd_enable_indirect |
Enable xbd indirect segments |
kern |
High kernel, proc, limits &c |
kern.acct_chkfreq |
frequency for checking the free space |
kern.acct_configured |
Accounting configured or not |
kern.acct_resume |
percentage of free disk space above which accounting resumes |
kern.acct_suspend |
percentage of free disk space below which accounting stops |
kern.acct_suspended |
Accounting suspended or not |
kern.always_console_output |
Always output to console despite TIOCCONS |
kern.arandom |
arc4rand |
kern.argmax |
Maximum bytes of argument to execve(2) |
kern.base_address |
Kernel base address |
kern.bio_transient_maxcnt |
Maximum number of transient BIO mappings |
kern.boot_id |
Random boot ID |
kern.boot_tag |
Tag added to dmesg at start of boot |
kern.bootfile |
Name of kernel file booted |
kern.boottime |
Estimated system boottime |
kern.boottrace |
boottrace statistics |
kern.boottrace.boottrace |
Capture a boot-time trace event |
kern.boottrace.enabled |
Boot-time and shutdown-time tracing enabled |
kern.boottrace.log |
Print a log of the boottrace trace data |
kern.boottrace.reset |
Reset run-time tracing table |
kern.boottrace.runtrace |
Capture a run-time trace event |
kern.boottrace.shutdown_trace |
Enable kernel shutdown tracing to the console |
kern.boottrace.shutdown_trace_threshold |
Tracing threshold (ms) below which tracing is ignored |
kern.boottrace.shuttrace |
Capture a shutdown-time trace event |
kern.boottrace.table_size |
Boot-time tracing table size |
kern.build_id |
Operating system build-id |
kern.callout_stat |
Dump immediate statistic snapshot of the scheduled callouts |
kern.cam |
CAM Subsystem |
kern.cam.ada |
CAM Direct Access Disk driver |
kern.cam.ada.[num] |
CAM ADA unit 1 |
kern.cam.ada.[num].delete_method |
BIO_DELETE execution method |
kern.cam.ada.[num].flags |
Flags for drive |
kern.cam.ada.[num].max_seq_zones |
Maximum Number of Open Sequential Write Required Zones |
kern.cam.ada.[num].optimal_nonseq_zones |
Optimal Number of Non-Sequentially Written Sequential Write Preferred Zones |
kern.cam.ada.[num].optimal_seq_zones |
Optimal Number of Open Sequential Write Preferred Zones |
kern.cam.ada.[num].read_ahead |
Enable disk read ahead. |
kern.cam.ada.[num].rotating |
Rotating media. This sysctl is *DEPRECATED*, gone in FreeBSD 15 |
kern.cam.ada.[num].sort_io_queue |
Sort IO queue to try and optimise disk access patterns |
kern.cam.ada.[num].trim_count |
Total number of dsm commands sent |
kern.cam.ada.[num].trim_goal |
Number of trims to try to accumulate before sending to hardware |
kern.cam.ada.[num].trim_lbas |
Total lbas in the dsm commands sent |
kern.cam.ada.[num].trim_ranges |
Total number of ranges in dsm commands |
kern.cam.ada.[num].trim_ticks |
I/O scheduler quanta to hold back trims while accumulating |
kern.cam.ada.[num].unmapped_io |
Use unmapped I/O. This sysctl is *DEPRECATED*, gone in FreeBSD 15 |
kern.cam.ada.[num].write_cache |
Enable disk write cache. |
kern.cam.ada.[num].zone_mode |
Zone Mode |
kern.cam.ada.[num].zone_support |
Zone Support |
kern.cam.ada.default_timeout |
Normal I/O timeout (in seconds) |
kern.cam.ada.enable_biospeedup |
Enable BIO_SPEEDUP processing |
kern.cam.ada.enable_uma_ccbs |
Use UMA for CCBs |
kern.cam.ada.read_ahead |
Enable disk read-ahead |
kern.cam.ada.retry_count |
Normal I/O retry count |
kern.cam.ada.send_ordered |
Send Ordered Tags |
kern.cam.ada.spindown_shutdown |
Spin down upon shutdown |
kern.cam.ada.spindown_suspend |
Spin down upon suspend |
kern.cam.ada.write_cache |
Enable disk write cache |
kern.cam.announce_nosbuf |
Don't use sbuf for announcements |
kern.cam.boot_delay |
Bus registration wait time |
kern.cam.cam_srch_hi |
Search above LUN 7 for SCSI3 and greater devices |
kern.cam.cd |
CAM CDROM driver |
kern.cam.cd.[num] |
CAM CD unit 0 |
kern.cam.cd.[num].minimum_cmd_size |
Minimum CDB size |
kern.cam.cd.poll_period |
Media polling period in seconds |
kern.cam.cd.retry_count |
Normal I/O retry count |
kern.cam.cd.timeout |
Timeout, in us, for read operations |
kern.cam.da |
CAM Direct Access Disk driver |
kern.cam.da.[num] |
CAM DA unit 0 |
kern.cam.da.[num].delete_max |
Maximum BIO_DELETE size |
kern.cam.da.[num].delete_method |
BIO_DELETE execution method |
kern.cam.da.[num].error_inject |
error_inject leaf |
kern.cam.da.[num].flags |
Flags for drive |
kern.cam.da.[num].max_seq_zones |
Maximum Number of Open Sequential Write Required Zones |
kern.cam.da.[num].minimum_cmd_size |
Minimum CDB size |
kern.cam.da.[num].optimal_nonseq_zones |
Optimal Number of Non-Sequentially Written Sequential Write Preferred Zones |
kern.cam.da.[num].optimal_seq_zones |
Optimal Number of Open Sequential Write Preferred Zones |
kern.cam.da.[num].p_type |
DIF protection type |
kern.cam.da.[num].rotating |
Rotating media *DEPRECATED* gone in FreeBSD 15 |
kern.cam.da.[num].sort_io_queue |
Sort IO queue to try and optimise disk access patterns |
kern.cam.da.[num].trim_count |
Total number of unmap/dsm commands sent |
kern.cam.da.[num].trim_goal |
Number of trims to try to accumulate before sending to hardware |
kern.cam.da.[num].trim_lbas |
Total lbas in the unmap/dsm commands sent |
kern.cam.da.[num].trim_ranges |
Total number of ranges in unmap/dsm commands |
kern.cam.da.[num].trim_ticks |
I/O scheduler quanta to hold back trims while accumulating |
kern.cam.da.[num].unmapped_io |
Unmapped I/O support *DEPRECATED* gone in FreeBSD 15 |
kern.cam.da.[num].zone_mode |
Zone Mode |
kern.cam.da.[num].zone_support |
Zone Support |
kern.cam.da.default_softtimeout |
Soft I/O timeout (ms) |
kern.cam.da.default_timeout |
Normal I/O timeout (in seconds) |
kern.cam.da.disable_wp_detection |
Disable detection of write-protected disks |
kern.cam.da.enable_biospeedup |
Enable BIO_SPEEDUP processing |
kern.cam.da.enable_uma_ccbs |
Use UMA for CCBs |
kern.cam.da.poll_period |
Media polling period in seconds |
kern.cam.da.retry_count |
Normal I/O retry count |
kern.cam.da.send_ordered |
Send Ordered Tags |
kern.cam.debug_delay |
Delay in us after each debug message |
kern.cam.dflags |
Enabled debug flags |
kern.cam.enc |
CAM Enclosure Services driver |
kern.cam.enc.emulate_array_devices |
Emulate Array Devices for SAF-TE |
kern.cam.enc.search_globally |
Search for disks on other buses |
kern.cam.enc.verbose |
Enable verbose logging |
kern.cam.iosched |
CAM I/O Scheduler parameters |
kern.cam.mapmem_thresh |
Threshold for user-space buffer mapping |
kern.cam.nda |
CAM Direct Access Disk driver |
kern.cam.nda.[num] |
CAM NDA unit 0 |
kern.cam.nda.[num].deletes |
Number of BIO_DELETE requests |
kern.cam.nda.[num].flags |
Flags for drive |
kern.cam.nda.[num].rotating |
Rotating media |
kern.cam.nda.[num].sort_io_queue |
Sort IO queue to try and optimise disk access patterns |
kern.cam.nda.[num].trim_count |
Total number of unmap/dsm commands sent |
kern.cam.nda.[num].trim_goal |
Number of trims to try to accumulate before sending to hardware |
kern.cam.nda.[num].trim_lbas |
Total lbas in the unmap/dsm commands sent |
kern.cam.nda.[num].trim_ranges |
Total number of ranges in unmap/dsm commands |
kern.cam.nda.[num].trim_ticks |
I/O scheduler quanta to hold back trims while accumulating |
kern.cam.nda.[num].unmapped_io |
Unmapped I/O leaf |
kern.cam.nda.enable_biospeedup |
Enable BIO_SPEEDUP processing. |
kern.cam.nda.max_trim |
Maximum number of BIO_DELETE to send down as a DSM TRIM. |
kern.cam.nda.nvd_compat |
Enable creation of nvd aliases. |
kern.cam.num_doneqs |
Number of completion queues/threads |
kern.cam.pmp |
CAM Direct Access Disk driver |
kern.cam.pmp.default_timeout |
Normal I/O timeout (in seconds) |
kern.cam.pmp.hide_special |
Hide extra ports |
kern.cam.pmp.retry_count |
Normal I/O retry count |
kern.cam.sa |
CAM Sequential Access Tape Driver |
kern.cam.sa.allow_io_split |
Default I/O split value |
kern.cam.scsi_delay |
Delay to allow devices to settle after a SCSI bus reset (ms) |
kern.cam.sort_io_queues |
Sort IO queues to try and optimise disk access patterns |
kern.cam.xpt_generation |
CAM peripheral generation count |
kern.capmode_coredump |
Allow processes in capability mode to dump core |
kern.ccpu |
Decay factor used for updating %CPU in 4BSD scheduler |
kern.chroot_allow_open_directories |
Allow a process to chroot(2) if it has a directory open |
kern.clockrate |
Rate and period of various kernel clocks |
kern.compiler_version |
Version of compiler used to compile kernel |
kern.compress_user_cores |
Enable compression of user corefiles (1 = gzip, 2 = zstd) |
kern.compress_user_cores_level |
Corefile compression level |
kern.conftxt |
Kernel configuration file |
kern.consmsgbuf_size |
Console tty buffer size |
kern.consmute |
State of the console muting |
kern.console |
Console device control |
kern.constty_wakeups_per_second |
Times per second to check for pending console tty messages |
kern.core_dump_can_intr |
Core dumping interruptible with SIGKILL |
kern.coredump |
Enable/Disable coredumps |
kern.coredump_devctl |
Generate a devctl notification when processes coredump |
kern.coredump_pack_fileinfo |
Enable file path packing in 'procstat -f' coredump notes |
kern.coredump_pack_vmmapinfo |
Enable file path packing in 'procstat -v' coredump notes |
kern.corefile |
Process corefile name format string |
kern.cp_time |
CPU time statistics |
kern.cp_times |
per-CPU time statistics |
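kern.cp_time and kern.cp_times return arrays of long counters (user, nice, system, interrupt, idle; kern.cp_times repeats the five counters once per CPU), so the buffer has to be sized before reading. A minimal C sketch of the usual two-call pattern with sysctlbyname(3), assuming that layout:

    /* Minimal sketch: read the per-CPU counters behind kern.cp_times. */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
            size_t len = 0;
            long *times;

            /* First call with a NULL buffer to learn the required size. */
            if (sysctlbyname("kern.cp_times", NULL, &len, NULL, 0) != 0) {
                    perror("sysctlbyname(kern.cp_times)");
                    return (1);
            }
            if ((times = malloc(len)) == NULL)
                    return (1);
            if (sysctlbyname("kern.cp_times", times, &len, NULL, 0) != 0) {
                    perror("sysctlbyname(kern.cp_times)");
                    free(times);
                    return (1);
            }
            /* Five counters (user, nice, system, interrupt, idle) per CPU. */
            printf("%zu counters (%zu CPUs)\n",
                len / sizeof(long), len / sizeof(long) / 5);
            free(times);
            return (0);
    }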
kern.crypto |
In-kernel cryptography |
kern.crypto.allow_soft |
Enable use of software crypto by /dev/crypto |
kern.crypto.cryptodev_separate_aad |
Use separate AAD buffer for /dev/crypto requests. |
kern.crypto.cryptodev_use_output |
Use separate output buffers for /dev/crypto requests. |
kern.crypto.num_workers |
Number of crypto workers used to dispatch crypto jobs |
kern.crypto.stats |
Crypto system statistics |
kern.crypto_workers_num |
Number of crypto workers used to dispatch crypto jobs |
kern.cryptodevallowsoft |
Enable/disable use of software crypto by /dev/crypto |
kern.devname |
devname(3) handler |
kern.devstat |
Device Statistics |
kern.devstat.all |
All devices in the devstat list |
kern.devstat.generation |
Devstat list generation |
kern.devstat.numdevs |
Number of devices in the devstat list |
kern.devstat.version |
Devstat list version number |
kern.dfldsiz |
Initial data size limit |
kern.dflssiz |
Initial stack size limit |
kern.dirdelay |
Time to delay syncing directories (in seconds) |
kern.disallow_high_osrel |
Disallow execution of binaries built for higher version of the world |
kern.disks |
names of available disks |
kern.domainname |
Name of the current YP/NIS domain |
kern.dummy |
|
kern.elf32 |
|
kern.elf32.allow_wx |
Allow pages to be mapped simultaneously writable and executable |
kern.elf32.aslr |
|
kern.elf32.aslr.enable |
ELF32: enable address map randomization |
kern.elf32.aslr.honor_sbrk |
ELF32: assume sbrk is used |
kern.elf32.aslr.pie_enable |
ELF32: enable address map randomization for PIE binaries |
kern.elf32.aslr.shared_page |
ELF32: enable shared page address randomization |
kern.elf32.aslr.stack |
ELF32: enable stack address randomization |
kern.elf32.fallback_brand |
ELF32 brand of last resort |
kern.elf32.nxstack |
ELF32: support PT_GNU_STACK for non-executable stack control |
kern.elf32.pie_base |
PIE load base without randomization |
kern.elf32.read_exec |
enable execution from readable segments |
kern.elf32.sigfastblock |
enable sigfastblock for new processes |
kern.elf32.vdso |
ELF32: enable vdso preloading |
kern.elf64 |
|
kern.elf64.allow_wx |
Allow pages to be mapped simultaneously writable and executable |
kern.elf64.aslr |
|
kern.elf64.aslr.enable |
ELF64: enable address map randomization |
kern.elf64.aslr.honor_sbrk |
ELF64: assume sbrk is used |
kern.elf64.aslr.pie_enable |
ELF64: enable address map randomization for PIE binaries |
kern.elf64.aslr.shared_page |
ELF64: enable shared page address randomization |
kern.elf64.aslr.stack |
ELF64: enable stack address randomization |
kern.elf64.fallback_brand |
ELF64 brand of last resort |
kern.elf64.nxstack |
ELF64: support PT_GNU_STACK for non-executable stack control |
kern.elf64.pie_base |
PIE load base without randomization |
kern.elf64.sigfastblock |
enable sigfastblock for new processes |
kern.elf64.vdso |
ELF64: enable vdso preloading |
kern.epoch |
epoch information |
kern.epoch.stats |
epoch stats |
kern.epoch.stats.epoch_call_tasks |
# of times a callback task was run |
kern.epoch.stats.epoch_calls |
# of times a callback was deferred |
kern.epoch.stats.migrations |
# of times thread was migrated to another CPU in epoch_wait |
kern.epoch.stats.nblocked |
# of times a thread was in an epoch when epoch_wait was called |
kern.epoch.stats.ncontended |
# of times a thread was blocked on a lock in an epoch during an epoch_wait |
kern.epoch.stats.switches |
# of times a thread voluntarily context switched in epoch_wait |
kern.evdev |
Evdev args |
kern.evdev.input |
Evdev input devices |
kern.evdev.input.[num] |
|
kern.evdev.input.[num].abs_bits |
Input device supported absolute events |
kern.evdev.input.[num].id |
Input device identification |
kern.evdev.input.[num].key_bits |
Input device supported keys |
kern.evdev.input.[num].led_bits |
Input device supported LED events |
kern.evdev.input.[num].msc_bits |
Input device supported miscellaneous events |
kern.evdev.input.[num].name |
Input device name |
kern.evdev.input.[num].phys |
Input device short name |
kern.evdev.input.[num].props |
Input device properties |
kern.evdev.input.[num].rel_bits |
Input device supported relative events |
kern.evdev.input.[num].snd_bits |
Input device supported sound events |
kern.evdev.input.[num].sw_bits |
Input device supported switch events |
kern.evdev.input.[num].type_bits |
Input device supported events types |
kern.evdev.input.[num].uniq |
Input device unique number |
kern.evdev.rcpt_mask |
Who is receiving events: bit0 - sysmouse, bit1 - kbdmux, bit2 - mouse hardware, bit3 - keyboard hardware |
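Because the description above defines rcpt_mask as a bitmask, reading and decoding the individual bits is straightforward. A minimal C sketch, assuming the node is an integer as implied:

    /* Minimal sketch: decode kern.evdev.rcpt_mask using the bit
     * assignments from the description above. */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            int mask = 0;
            size_t len = sizeof(mask);

            if (sysctlbyname("kern.evdev.rcpt_mask", &mask, &len, NULL, 0) != 0) {
                    perror("sysctlbyname(kern.evdev.rcpt_mask)");
                    return (1);
            }
            printf("sysmouse=%d kbdmux=%d mouse hw=%d keyboard hw=%d\n",
                !!(mask & 0x1), !!(mask & 0x2), !!(mask & 0x4), !!(mask & 0x8));
            return (0);
    }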
kern.evdev.sysmouse_t_axis |
Extract T-axis from 0-none, 1-ums, 2-psm, 3-wsp |
kern.eventtimer |
Event timers |
kern.eventtimer.choice |
Present event timers |
kern.eventtimer.et |
|
kern.eventtimer.et.HPET |
event timer description |
kern.eventtimer.et.HPET.flags |
Event timer capabilities |
kern.eventtimer.et.HPET.frequency |
Event timer base frequency |
kern.eventtimer.et.HPET.quality |
Goodness of event timer |
kern.eventtimer.et.HPET1 |
event timer description |
kern.eventtimer.et.HPET1.flags |
Event timer capabilities |
kern.eventtimer.et.HPET1.frequency |
Event timer base frequency |
kern.eventtimer.et.HPET1.quality |
Goodness of event timer |
kern.eventtimer.et.HPET2 |
event timer description |
kern.eventtimer.et.HPET2.flags |
Event timer capabilities |
kern.eventtimer.et.HPET2.frequency |
Event timer base frequency |
kern.eventtimer.et.HPET2.quality |
Goodness of event timer |
kern.eventtimer.et.HPET3 |
event timer description |
kern.eventtimer.et.HPET3.flags |
Event timer capabilities |
kern.eventtimer.et.HPET3.frequency |
Event timer base frequency |
kern.eventtimer.et.HPET3.quality |
Goodness of event timer |
kern.eventtimer.et.HPET4 |
event timer description |
kern.eventtimer.et.HPET4.flags |
Event timer capabilities |
kern.eventtimer.et.HPET4.frequency |
Event timer base frequency |
kern.eventtimer.et.HPET4.quality |
Goodness of event timer |
kern.eventtimer.et.HPET5 |
event timer description |
kern.eventtimer.et.HPET5.flags |
Event timer capabilities |
kern.eventtimer.et.HPET5.frequency |
Event timer base frequency |
kern.eventtimer.et.HPET5.quality |
Goodness of event timer |
kern.eventtimer.et.HPET6 |
event timer description |
kern.eventtimer.et.HPET6.flags |
Event timer capabilities |
kern.eventtimer.et.HPET6.frequency |
Event timer base frequency |
kern.eventtimer.et.HPET6.quality |
Goodness of event timer |
kern.eventtimer.et.HPET7 |
event timer description |
kern.eventtimer.et.HPET7.flags |
Event timer capabilities |
kern.eventtimer.et.HPET7.frequency |
Event timer base frequency |
kern.eventtimer.et.HPET7.quality |
Goodness of event timer |
kern.eventtimer.et.Hyper-V |
event timer description |
kern.eventtimer.et.Hyper-V.flags |
Event timer capabilities |
kern.eventtimer.et.Hyper-V.frequency |
Event timer base frequency |
kern.eventtimer.et.Hyper-V.quality |
Goodness of event timer |
kern.eventtimer.et.LAPIC |
event timer description |
kern.eventtimer.et.LAPIC.flags |
Event timer capabilities |
kern.eventtimer.et.LAPIC.frequency |
Event timer base frequency |
kern.eventtimer.et.LAPIC.quality |
Goodness of event timer |
kern.eventtimer.et.RTC |
event timer description |
kern.eventtimer.et.RTC.flags |
Event timer capabilities |
kern.eventtimer.et.RTC.frequency |
Event timer base frequency |
kern.eventtimer.et.RTC.quality |
Goodness of event timer |
kern.eventtimer.et.i8254 |
event timer description |
kern.eventtimer.et.i8254.flags |
Event timer capabilities |
kern.eventtimer.et.i8254.frequency |
Event timer base frequency |
kern.eventtimer.et.i8254.quality |
Goodness of event timer |
kern.eventtimer.idletick |
Run periodic events when idle |
kern.eventtimer.periodic |
Enable event timer periodic mode |
kern.eventtimer.singlemul |
Multiplier for periodic mode |
kern.eventtimer.timer |
Chosen event timer |
kern.fallback_elf_brand |
compatibility for kern.fallback_elf_brand |
kern.features |
Kernel Features |
kern.features.aio |
Asynchronous I/O |
kern.features.ata_cam |
ATA devices are accessed through the cam(4) driver |
kern.features.audit |
BSM audit support |
kern.features.compat_freebsd10 |
Compatible with FreeBSD 10 |
kern.features.compat_freebsd11 |
Compatible with FreeBSD 11 |
kern.features.compat_freebsd12 |
Compatible with FreeBSD 12 |
kern.features.compat_freebsd13 |
Compatible with FreeBSD 13 |
kern.features.compat_freebsd14 |
Compatible with FreeBSD 14 |
kern.features.compat_freebsd32 |
Compatible with 32-bit FreeBSD |
kern.features.compat_freebsd4 |
Compatible with FreeBSD 4 |
kern.features.compat_freebsd5 |
Compatible with FreeBSD 5 |
kern.features.compat_freebsd6 |
Compatible with FreeBSD 6 |
kern.features.compat_freebsd7 |
Compatible with FreeBSD 7 |
kern.features.compat_freebsd9 |
Compatible with FreeBSD 9 |
kern.features.compat_freebsd_32bit |
Compatible with 32-bit FreeBSD (legacy feature name) |
kern.features.cuse |
Userspace character devices |
kern.features.debugnet |
Debugnet support |
kern.features.ekcd |
Encrypted kernel crash dumps support |
kern.features.evdev |
Input event devices support |
kern.features.evdev_support |
Evdev support in hybrid drivers |
kern.features.ffs_snapshot |
FFS snapshot support |
kern.features.geom_eli |
GEOM crypto module |
kern.features.geom_label |
GEOM labeling support |
kern.features.geom_mirror |
GEOM mirroring support |
kern.features.geom_part_bsd |
GEOM partitioning class for BSD disklabels |
kern.features.geom_part_ebr |
GEOM partitioning class for extended boot records support |
kern.features.geom_part_ebr_compat |
GEOM EBR partitioning class: backward-compatible partition names |
kern.features.geom_part_gpt |
GEOM partitioning class for GPT partitions support |
kern.features.geom_part_mbr |
GEOM partitioning class for MBR support |
kern.features.hwpmc_hooks |
Kernel support for HW PMC |
kern.features.inet |
Internet Protocol version 4 |
kern.features.inet6 |
Internet Protocol version 6 |
kern.features.invariant_support |
Support for modules compiled with INVARIANTS option |
kern.features.invariants |
Kernel compiled with INVARIANTS, may affect performance |
kern.features.ipv4_rfc5549_support |
Route IPv4 packets via IPv6 nexthops |
kern.features.kdtrace_hooks |
Kernel DTrace hooks which are required to load DTrace kernel modules |
kern.features.kposix_priority_scheduling |
POSIX P1003.1B realtime extensions |
kern.features.ktrace |
Kernel support for system-call tracing |
kern.features.linux |
Linux 32bit support |
kern.features.linux64 |
Linux 64bit support |
kern.features.linuxulator_v4l |
V4L ioctl wrapper support in the linuxulator |
kern.features.linuxulator_v4l2 |
V4L2 ioctl wrapper support in the linuxulator |
kern.features.netdump |
Netdump client support |
kern.features.netgdb |
NetGDB support |
kern.features.netlink |
Netlink support |
kern.features.nfscl |
NFSv4 client |
kern.features.nfsd |
NFSv4 server |
kern.features.p1003_1b_mqueue |
POSIX P1003.1B message queues support |
kern.features.posix_shm |
POSIX shared memory |
kern.features.pps_sync |
Support usage of external PPS signal by kernel PLL |
kern.features.process_descriptors |
Process Descriptors |
kern.features.racct |
Resource Accounting |
kern.features.rctl |
Resource Limits |
kern.features.scbus |
SCSI devices support |
kern.features.security_capabilities |
Capsicum Capabilities |
kern.features.security_capability_mode |
Capsicum Capability Mode |
kern.features.security_mac |
Mandatory Access Control Framework support |
kern.features.softupdates |
FFS soft-updates support |
kern.features.stack |
Support for capturing kernel stack |
kern.features.sysv_msg |
System V message queues support |
kern.features.sysv_sem |
System V semaphores support |
kern.features.sysv_shm |
System V shared memory segments support |
kern.features.ufs_acl |
ACL support for UFS |
kern.features.ufs_gjournal |
Journaling support through GEOM for UFS |
kern.features.ufs_quota |
UFS disk quotas support |
kern.features.ufs_quota64 |
64bit UFS disk quotas support |
kern.features.vimage |
VIMAGE kernel virtualization |
kern.features.witness |
kernel has witness(9) support |
kern.features.zfs |
OpenZFS support |
kern.file |
Entire file table |
kern.filedelay |
Time to delay syncing files (in seconds) |
kern.forcesigexit |
Force trap signal to be handled |
kern.fscale |
Fixed-point scale factor used for calculating load average values |
kern.function_list |
kernel function list |
kern.geom |
GEOMetry management |
kern.geom.collectstats |
Control statistics collection on GEOM providers and consumers |
kern.geom.confdot |
Dump the GEOM config in dot |
kern.geom.conftxt |
Dump the GEOM config in txt |
kern.geom.confxml |
Dump the GEOM config in XML |
kern.geom.debugflags |
Set various trace levels for GEOM debugging |
kern.geom.dev |
GEOM_DEV stuff |
kern.geom.dev.delete_max_sectors |
Maximum number of sectors in a single delete request sent to the provider. Larger requests are chunked so they can be interrupted. (0 = disable chunking) |
kern.geom.disk |
GEOM_DISK stuff |
kern.geom.disk.ada0 |
GEOM disk ada0 |
kern.geom.disk.ada0.flags |
Report disk flags |
kern.geom.disk.ada0.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.ada0.led |
LED name |
kern.geom.disk.ada1 |
GEOM disk ada1 |
kern.geom.disk.ada1.flags |
Report disk flags |
kern.geom.disk.ada1.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.ada1.led |
LED name |
kern.geom.disk.ada2 |
GEOM disk ada2 |
kern.geom.disk.ada2.flags |
Report disk flags |
kern.geom.disk.ada2.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.ada2.led |
LED name |
kern.geom.disk.ada3 |
GEOM disk ada3 |
kern.geom.disk.ada3.flags |
Report disk flags |
kern.geom.disk.ada3.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.ada3.led |
LED name |
kern.geom.disk.cd0 |
GEOM disk cd0 |
kern.geom.disk.cd0.flags |
Report disk flags |
kern.geom.disk.cd0.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.cd0.led |
LED name |
kern.geom.disk.da0 |
GEOM disk da0 |
kern.geom.disk.da0.flags |
Report disk flags |
kern.geom.disk.da0.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.da0.led |
LED name |
kern.geom.disk.da1 |
GEOM disk da1 |
kern.geom.disk.da1.flags |
Report disk flags |
kern.geom.disk.da1.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.da1.led |
LED name |
kern.geom.disk.da10 |
GEOM disk da10 |
kern.geom.disk.da10.flags |
Report disk flags |
kern.geom.disk.da10.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.da10.led |
LED name |
kern.geom.disk.da11 |
GEOM disk da11 |
kern.geom.disk.da11.flags |
Report disk flags |
kern.geom.disk.da11.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.da11.led |
LED name |
kern.geom.disk.da12 |
GEOM disk da12 |
kern.geom.disk.da12.flags |
Report disk flags |
kern.geom.disk.da12.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.da12.led |
LED name |
kern.geom.disk.da13 |
GEOM disk da13 |
kern.geom.disk.da13.flags |
Report disk flags |
kern.geom.disk.da13.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.da13.led |
LED name |
kern.geom.disk.da14 |
GEOM disk da14 |
kern.geom.disk.da14.flags |
Report disk flags |
kern.geom.disk.da14.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.da14.led |
LED name |
kern.geom.disk.da15 |
GEOM disk da15 |
kern.geom.disk.da15.flags |
Report disk flags |
kern.geom.disk.da15.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.da15.led |
LED name |
kern.geom.disk.da2 |
GEOM disk da2 |
kern.geom.disk.da2.flags |
Report disk flags |
kern.geom.disk.da2.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.da2.led |
LED name |
kern.geom.disk.da3 |
GEOM disk da3 |
kern.geom.disk.da3.flags |
Report disk flags |
kern.geom.disk.da3.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.da3.led |
LED name |
kern.geom.disk.da4 |
GEOM disk da4 |
kern.geom.disk.da4.flags |
Report disk flags |
kern.geom.disk.da4.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.da4.led |
LED name |
kern.geom.disk.da5 |
GEOM disk da5 |
kern.geom.disk.da5.flags |
Report disk flags |
kern.geom.disk.da5.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.da5.led |
LED name |
kern.geom.disk.da6 |
GEOM disk da6 |
kern.geom.disk.da6.flags |
Report disk flags |
kern.geom.disk.da6.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.da6.led |
LED name |
kern.geom.disk.da7 |
GEOM disk da7 |
kern.geom.disk.da7.flags |
Report disk flags |
kern.geom.disk.da7.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.da7.led |
LED name |
kern.geom.disk.da8 |
GEOM disk da8 |
kern.geom.disk.da8.flags |
Report disk flags |
kern.geom.disk.da8.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.da8.led |
LED name |
kern.geom.disk.da9 |
GEOM disk da9 |
kern.geom.disk.da9.flags |
Report disk flags |
kern.geom.disk.da9.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.da9.led |
LED name |
kern.geom.disk.nda0 |
GEOM disk nda0 |
kern.geom.disk.nda0.flags |
Report disk flags |
kern.geom.disk.nda0.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.nda0.led |
LED name |
kern.geom.disk.nda1 |
GEOM disk nda1 |
kern.geom.disk.nda1.flags |
Report disk flags |
kern.geom.disk.nda1.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.nda1.led |
LED name |
kern.geom.disk.nda2 |
GEOM disk nda2 |
kern.geom.disk.nda2.flags |
Report disk flags |
kern.geom.disk.nda2.flush_notsup_succeed |
Do not return EOPNOTSUPP if there is no cache to flush |
kern.geom.disk.nda2.led |
LED name |
kern.geom.eli |
GEOM_ELI stuff |
kern.geom.eli.batch |
Use crypto operations batching |
kern.geom.eli.blocking_malloc |
Use blocking malloc calls for GELI buffers |
kern.geom.eli.boot_passcache |
Passphrases are cached during boot process for possible reuse |
kern.geom.eli.debug |
Debug level |
kern.geom.eli.key_cache_hits |
Key cache hits |
kern.geom.eli.key_cache_limit |
Maximum number of encryption keys to cache |
kern.geom.eli.key_cache_misses |
Key cache misses |
kern.geom.eli.minbufs |
Number of GELI bufs reserved for swap transactions |
kern.geom.eli.overwrites |
Number of times on-disk keys should be overwritten when destroying them |
kern.geom.eli.threads |
Number of threads doing crypto work |
kern.geom.eli.tries |
Number of tries for entering the passphrase |
kern.geom.eli.unmapped_io |
Enable support for unmapped I/O |
kern.geom.eli.use_uma_bytes |
Use uma(9) for allocations of this size or smaller. |
kern.geom.eli.version |
GELI version |
kern.geom.eli.visible_passphrase |
Visibility of passphrase prompt (0 = invisible, 1 = visible, 2 = asterisk) |
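The kern.geom.eli.* nodes above are ordinary sysctl values and can be queried programmatically. The following is a minimal sketch, not part of the listing itself, that reads two of them with sysctlbyname(3); it assumes kern.geom.eli.threads and kern.geom.eli.version are integer-valued on the running kernel (the returned length is checked rather than taken on faith).

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    /* Read an integer-typed sysctl node by name and print it. */
    static void
    print_int_sysctl(const char *name)
    {
            int value;
            size_t len = sizeof(value);

            if (sysctlbyname(name, &value, &len, NULL, 0) == -1) {
                    perror(name);
                    return;
            }
            if (len != sizeof(value)) {
                    fprintf(stderr, "%s: unexpected size %zu\n", name, len);
                    return;
            }
            printf("%s = %d\n", name, value);
    }

    int
    main(void)
    {
            print_int_sysctl("kern.geom.eli.threads");   /* crypto worker threads */
            print_int_sysctl("kern.geom.eli.version");   /* GELI version */
            return (0);
    }

The same pattern should work for the other read-only integer counters in this list, such as kern.geom.eli.key_cache_hits.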
kern.geom.inflight_transient_maps |
Current count of the active transient maps |
kern.geom.label |
GEOM_LABEL stuff |
kern.geom.label.debug |
Debug level |
kern.geom.label.disk_ident |
|
kern.geom.label.disk_ident.enable |
Create device nodes for drives which export a disk identification string |
kern.geom.label.ext2fs |
|
kern.geom.label.ext2fs.enable |
Create device nodes for EXT2FS volumes |
kern.geom.label.flashmap |
|
kern.geom.label.flashmap.enable |
Create device nodes for Flashmap labels |
kern.geom.label.gpt |
|
kern.geom.label.gpt.enable |
Create device nodes for GPT labels |
kern.geom.label.gptid |
|
kern.geom.label.gptid.enable |
Create device nodes for GPT UUIDs |
kern.geom.label.iso9660 |
|
kern.geom.label.iso9660.enable |
Create device nodes for ISO9660 volume names |
kern.geom.label.msdosfs |
|
kern.geom.label.msdosfs.enable |
Create device nodes for MSDOSFS volumes |
kern.geom.label.ntfs |
|
kern.geom.label.ntfs.enable |
Create device nodes for NTFS volumes |
kern.geom.label.reiserfs |
|
kern.geom.label.reiserfs.enable |
Create device nodes for REISERFS volumes |
kern.geom.label.swaplinux |
|
kern.geom.label.swaplinux.enable |
Create device nodes for Linux swap |
kern.geom.label.ufs |
|
kern.geom.label.ufs.enable |
Create device nodes for UFS volume names |
kern.geom.label.ufsid |
|
kern.geom.label.ufsid.enable |
Create device nodes for UFS file system IDs |
kern.geom.mirror |
GEOM_MIRROR stuff |
kern.geom.mirror.debug |
Debug level |
kern.geom.mirror.disconnect_on_failure |
Disconnect component on I/O failure. |
kern.geom.mirror.idletime |
Mark components as clean when idling |
kern.geom.mirror.launch_mirror_before_timeout |
If false, force gmirror to wait out the full kern.geom.mirror.timeout before launching mirrors |
kern.geom.mirror.sync_requests |
Parallel synchronization I/O requests. |
kern.geom.mirror.sync_update_period |
Metadata update period during synchronization, in seconds |
kern.geom.mirror.timeout |
Time to wait on all mirror components |
kern.geom.nomem_count |
Total count of requests completed with status of ENOMEM |
kern.geom.notaste |
Prevent GEOM tasting |
kern.geom.part |
GEOM_PART stuff |
kern.geom.part.allow_nesting |
Allow additional levels of nesting |
kern.geom.part.auto_resize |
Enable auto resize |
kern.geom.part.check_integrity |
Enable integrity checking |
kern.geom.part.ebr |
GEOM_PART_EBR Extended Boot Record |
kern.geom.part.ebr.compat_aliases |
Set non-zero to enable EBR compatibility alias names (e.g., ada0p5) |
kern.geom.part.gpt |
GEOM_PART_GPT GUID Partition Table |
kern.geom.part.gpt.allow_nesting |
Allow GPT to be nested inside other schemes |
kern.geom.part.mbr |
GEOM_PART_MBR Master Boot Record |
kern.geom.part.mbr.enforce_chs |
Enforce alignment to CHS addressing |
kern.geom.part.separator |
Partition name separator |
kern.geom.pause_count |
Total count of requests stalled due to low memory in g_down |
kern.geom.raid |
GEOM_RAID stuff |
kern.geom.raid.aggressive_spare |
Use disks without metadata as spare |
kern.geom.raid.clean_time |
Mark volume as clean when idling |
kern.geom.raid.concat |
CONCAT transformation module |
kern.geom.raid.concat.enable |
Enable CONCAT transformation module taste |
kern.geom.raid.ddf |
DDF metadata module |
kern.geom.raid.ddf.enable |
Enable DDF metadata format taste |
kern.geom.raid.debug |
Debug level |
kern.geom.raid.disconnect_on_failure |
Disconnect component on I/O failure. |
kern.geom.raid.enable |
Enable on-disk metadata taste |
kern.geom.raid.idle_threshold |
Time in microseconds to consider a volume idle. |
kern.geom.raid.intel |
Intel metadata module |
kern.geom.raid.intel.enable |
Enable Intel metadata format taste |
kern.geom.raid.jmicron |
JMicron metadata module |
kern.geom.raid.jmicron.enable |
Enable JMicron metadata format taste |
kern.geom.raid.name_format |
Provider name format. |
kern.geom.raid.nvidia |
NVIDIA metadata module |
kern.geom.raid.nvidia.enable |
Enable NVIDIA metadata format taste |
kern.geom.raid.promise |
Promise metadata module |
kern.geom.raid.promise.enable |
Enable Promise metadata format taste |
kern.geom.raid.raid0 |
RAID0 transformation module |
kern.geom.raid.raid0.enable |
Enable RAID0 transformation module taste |
kern.geom.raid.raid1 |
RAID1 transformation module |
kern.geom.raid.raid1.enable |
Enable RAID1 transformation module taste |
kern.geom.raid.raid1.rebuild_cluster_idle |
Number of slabs to do each time we trigger a rebuild cycle |
kern.geom.raid.raid1.rebuild_fair_io |
Fraction of the I/O bandwidth to use for rebuild when the disk is busy. |
kern.geom.raid.raid1.rebuild_meta_update |
When to update the metadata. |
kern.geom.raid.raid1.rebuild_slab_size |
Amount of the disk to rebuild each read/write cycle of the rebuild. |
kern.geom.raid.raid1e |
RAID1E transformation module |
kern.geom.raid.raid1e.enable |
Enable RAID1E transformation module taste |
kern.geom.raid.raid1e.rebuild_cluster_idle |
Number of slabs to do each time we trigger a rebuild cycle |
kern.geom.raid.raid1e.rebuild_fair_io |
Fraction of the I/O bandwidth to use for rebuild when the disk is busy. |
kern.geom.raid.raid1e.rebuild_meta_update |
When to update the metadata. |
kern.geom.raid.raid1e.rebuild_slab_size |
Amount of the disk to rebuild each read/write cycle of the rebuild. |
kern.geom.raid.raid5 |
RAID5 transformation module |
kern.geom.raid.raid5.enable |
Enable RAID5 transformation module taste |
kern.geom.raid.read_err_thresh |
Number of read errors equated to disk failure |
kern.geom.raid.sii |
SiI metadata module |
kern.geom.raid.sii.enable |
Enable SiI metadata format taste |
kern.geom.raid.start_timeout |
Time to wait for all array components |
kern.geom.transient_map_hard_failures |
Failures to establish the transient mapping due to retry attempts exhausted |
kern.geom.transient_map_retries |
Max count of retries used before giving up on creating transient map |
kern.geom.transient_map_soft_failures |
Count of retried failures to establish the transient mapping |
kern.geom.transient_maps |
Total count of the transient mapping requests |
kern.hostid |
Host ID |
kern.hostname |
Hostname |
kern.hostuuid |
Host UUID |
kern.hwpmc |
HWPMC parameters |
kern.hwpmc.softevents |
maximum number of soft events |
kern.hz |
Number of clock ticks per second |
kern.hz_max |
Maximum hz value supported |
kern.hz_min |
Minimum hz value supported |
kern.ident |
Kernel identifier |
kern.init_path |
Path used to search the init process |
kern.init_shutdown_timeout |
Shutdown timeout of init(8). Unused within kernel, but used to control init(8) |
kern.iov_max |
Maximum number of elements in an I/O vector; sysconf(_SC_IOV_MAX) |
kern.ipc |
IPC |
kern.ipc.aio |
socket AIO stats |
kern.ipc.aio.empty_results |
socket operation returned EAGAIN |
kern.ipc.aio.empty_retries |
socket operation retries |
kern.ipc.aio.lifetime |
Maximum lifetime for idle aiod |
kern.ipc.aio.max_procs |
Maximum number of kernel processes to use for async socket IO |
kern.ipc.aio.num_procs |
Number of active kernel processes for async socket IO |
kern.ipc.aio.target_procs |
Preferred number of ready kernel processes for async socket IO |
kern.ipc.max_hdr |
Size of largest link plus protocol header |
kern.ipc.max_linkhdr |
Size of largest link layer header |
kern.ipc.max_protohdr |
Size of largest protocol layer header |
kern.ipc.maxmbufmem |
Maximum real memory allocatable to various mbuf types |
kern.ipc.maxpipekva |
Pipe KVA limit |
kern.ipc.maxsockbuf |
Maximum socket buffer size |
kern.ipc.maxsockets |
Maximum number of sockets available |
kern.ipc.mb_use_ext_pgs |
Use unmapped mbufs for sendfile(2) and TLS offload |
kern.ipc.msgmax |
Maximum message size |
kern.ipc.msgmnb |
Maximum number of bytes in a queue |
kern.ipc.msgmni |
Number of message queue identifiers |
kern.ipc.msgseg |
Number of message segments |
kern.ipc.msgssz |
Size of a message segment |
kern.ipc.msgtql |
Maximum number of messages in the system |
kern.ipc.msqids |
Array of struct msqid_kernel for each potential message queue |
kern.ipc.nmbclusters |
Maximum number of mbuf clusters allowed |
kern.ipc.nmbjumbo16 |
Maximum number of mbuf 16k jumbo clusters allowed |
kern.ipc.nmbjumbo9 |
Maximum number of mbuf 9k jumbo clusters allowed |
kern.ipc.nmbjumbop |
Maximum number of mbuf page size jumbo clusters allowed |
kern.ipc.nmbufs |
Maximum number of mbufs allowed |
kern.ipc.num_snd_tags |
# of active mbuf send tags |
kern.ipc.numopensockets |
Number of open sockets |
kern.ipc.pipe_mindirect |
Minimum write size triggering VM optimization |
kern.ipc.pipeallocfail |
Pipe allocation failures |
kern.ipc.pipebuf_reserv |
Superuser-reserved percentage of the pipe buffers space |
kern.ipc.pipefragretry |
Pipe allocation retries due to fragmentation |
kern.ipc.pipekva |
Pipe KVA usage |
kern.ipc.piperesizeallowed |
Pipe resizing allowed |
kern.ipc.piperesizefail |
Pipe resize failures |
kern.ipc.posix_shm_list |
POSIX SHM list |
kern.ipc.sema |
Array of struct semid_kernel for each potential semaphore |
kern.ipc.semaem |
Adjust on exit max value |
kern.ipc.semmni |
Number of semaphore identifiers |
kern.ipc.semmns |
Maximum number of semaphores in the system |
kern.ipc.semmnu |
Maximum number of undo structures in the system |
kern.ipc.semmsl |
Max semaphores per id |
kern.ipc.semopm |
Max operations per semop call |
kern.ipc.semume |
Max undo entries per process |
kern.ipc.semusz |
Size in bytes of undo structure |
kern.ipc.semvmx |
Semaphore maximum value |
kern.ipc.sfstat |
sendfile statistics |
kern.ipc.shm_allow_removed |
Enable/Disable attachment to attached segments marked for removal |
kern.ipc.shm_use_phys |
Enable/Disable locking of shared memory pages in core |
kern.ipc.shmall |
Maximum number of pages available for shared memory |
kern.ipc.shmmax |
Maximum shared memory segment size |
kern.ipc.shmmin |
Minimum shared memory segment size |
kern.ipc.shmmni |
Number of shared memory identifiers |
kern.ipc.shmseg |
Number of segments per process |
kern.ipc.shmsegs |
Array of struct shmid_kernel for each potential shared memory segment |
kern.ipc.soacceptqueue |
Maximum listen socket pending connection accept queue size |
kern.ipc.sockbuf_waste_factor |
Socket buffer size waste factor |
kern.ipc.somaxconn |
Maximum listen socket pending connection accept queue size (compat) |
kern.ipc.sooverinterval |
Delay in seconds between warnings for listen socket overflows |
kern.ipc.sooverprio |
Log priority for listen socket overflows: 0..7 or -1 to disable |
kern.ipc.splice |
Settings relating to the SO_SPLICE socket option |
kern.ipc.splice.receive_stream |
Use soreceive_stream() for stream splices |
kern.ipc.tls |
Kernel TLS offload |
kern.ipc.tls.bind_threads |
Bind crypto threads to cores (1) or cores and domains (2) at boot |
kern.ipc.tls.cbc_enable |
Enable support of AES-CBC crypto for kernel TLS |
kern.ipc.tls.enable |
Enable support for kernel TLS offload |
kern.ipc.tls.ifnet |
Hardware (ifnet) TLS session stats |
kern.ipc.tls.ifnet.cbc |
Active number of ifnet TLS sessions using AES-CBC |
kern.ipc.tls.ifnet.chacha20 |
Active number of ifnet TLS sessions using Chacha20-Poly1305 |
kern.ipc.tls.ifnet.gcm |
Active number of ifnet TLS sessions using AES-GCM |
kern.ipc.tls.ifnet.permitted |
Whether to permit hardware (ifnet) TLS sessions |
kern.ipc.tls.ifnet.reset |
TLS sessions updated to a new ifnet send tag |
kern.ipc.tls.ifnet.reset_dropped |
TLS sessions dropped after failing to update ifnet send tag |
kern.ipc.tls.ifnet.reset_failed |
TLS sessions that failed to allocate a new ifnet send tag |
kern.ipc.tls.ifnet_max_rexmit_pct |
Max percent bytes retransmitted before ifnet TLS is disabled |
kern.ipc.tls.max_alloc |
Max number of 16k buffers to allocate in thread context |
kern.ipc.tls.max_reclaim |
Max number of 16k buffers to reclaim in thread context |
kern.ipc.tls.maxlen |
Maximum TLS record size |
kern.ipc.tls.stats |
Kernel TLS offload stats |
kern.ipc.tls.stats.active |
Total Active TLS sessions |
kern.ipc.tls.stats.corrupted_records |
Total corrupted TLS records received |
kern.ipc.tls.stats.destroy_task |
Number of times ktls session was destroyed via taskqueue |
kern.ipc.tls.stats.enable_calls |
Total number of TLS enable calls made |
kern.ipc.tls.stats.failed_crypto |
Total TLS crypto failures |
kern.ipc.tls.stats.ifnet_disable_failed |
TLS sessions unable to switch to SW from ifnet |
kern.ipc.tls.stats.ifnet_disable_ok |
TLS sessions able to switch to SW from ifnet |
kern.ipc.tls.stats.ocf |
Kernel TLS offload via OCF stats |
kern.ipc.tls.stats.ocf.inplace |
Total number of OCF in-place operations |
kern.ipc.tls.stats.ocf.retries |
Number of OCF encryption operation retries |
kern.ipc.tls.stats.ocf.separate_output |
Total number of OCF operations with a separate output buffer |
kern.ipc.tls.stats.ocf.tls10_cbc_encrypts |
Total number of OCF TLS 1.0 CBC encryption operations |
kern.ipc.tls.stats.ocf.tls11_cbc_decrypts |
Total number of OCF TLS 1.1/1.2 CBC decryption operations |
kern.ipc.tls.stats.ocf.tls11_cbc_encrypts |
Total number of OCF TLS 1.1/1.2 CBC encryption operations |
kern.ipc.tls.stats.ocf.tls12_chacha20_decrypts |
Total number of OCF TLS 1.2 Chacha20-Poly1305 decryption operations |
kern.ipc.tls.stats.ocf.tls12_chacha20_encrypts |
Total number of OCF TLS 1.2 Chacha20-Poly1305 encryption operations |
kern.ipc.tls.stats.ocf.tls12_gcm_decrypts |
Total number of OCF TLS 1.2 GCM decryption operations |
kern.ipc.tls.stats.ocf.tls12_gcm_encrypts |
Total number of OCF TLS 1.2 GCM encryption operations |
kern.ipc.tls.stats.ocf.tls12_gcm_recrypts |
Total number of OCF TLS 1.2 GCM re-encryption operations |
kern.ipc.tls.stats.ocf.tls13_chacha20_decrypts |
Total number of OCF TLS 1.3 Chacha20-Poly1305 decryption operations |
kern.ipc.tls.stats.ocf.tls13_chacha20_encrypts |
Total number of OCF TLS 1.3 Chacha20-Poly1305 encryption operations |
kern.ipc.tls.stats.ocf.tls13_gcm_decrypts |
Total number of OCF TLS 1.3 GCM decryption operations |
kern.ipc.tls.stats.ocf.tls13_gcm_encrypts |
Total number of OCF TLS 1.3 GCM encryption operations |
kern.ipc.tls.stats.ocf.tls13_gcm_recrypts |
Total number of OCF TLS 1.3 GCM re-encryption operations |
kern.ipc.tls.stats.offload_total |
Total successful TLS setups (parameters set) |
kern.ipc.tls.stats.sw_rx_inqueue |
Number of TLS sockets in queue to tasks for SW decryption |
kern.ipc.tls.stats.sw_tx_inqueue |
Number of TLS records in queue to tasks for SW encryption |
kern.ipc.tls.stats.sw_tx_pending |
Number of TLS 1.0 records waiting for earlier TLS records |
kern.ipc.tls.stats.switch_failed |
TLS sessions unable to switch between SW and ifnet |
kern.ipc.tls.stats.switch_to_ifnet |
TLS sessions switched from SW to ifnet |
kern.ipc.tls.stats.switch_to_sw |
TLS sessions switched from ifnet to SW |
kern.ipc.tls.stats.threads |
Number of TLS threads in thread-pool |
kern.ipc.tls.sw |
Software TLS session stats |
kern.ipc.tls.sw.cbc |
Active number of software TLS sessions using AES-CBC |
kern.ipc.tls.sw.chacha20 |
Active number of software TLS sessions using Chacha20-Poly1305 |
kern.ipc.tls.sw.gcm |
Active number of software TLS sessions using AES-GCM |
kern.ipc.tls.sw_buffer_cache |
Enable caching of output buffers for SW encryption |
kern.ipc.tls.tasks_active |
Number of active tasks |
kern.ipc.tls.toe |
TOE TLS session stats |
kern.ipc.tls.toe.cbc |
Active number of TOE TLS sessions using AES-CBC |
kern.ipc.tls.toe.chacha20 |
Active number of TOE TLS sessions using Chacha20-Poly1305 |
kern.ipc.tls.toe.gcm |
Active number of TOE TLS sessions using AES-GCM |
kern.ipc.umtx_max_robust |
Maximum number of robust mutexes allowed for each thread |
kern.ipc.umtx_vnode_persistent |
If false, force destruction of umtx attached to a file on last close |
kern.job_control |
Whether job control is available |
kern.kerneldump_gzlevel |
Kernel crash dump compression level |
kern.kill_on_debugger_exit |
Kill ptraced processes when debugger exits |
kern.kobj_methodcount |
Number of kernel object methods registered |
kern.kq_calloutmax |
Maximum number of callouts allocated for kqueue |
kern.kstack_pages |
Kernel stack size in pages |
kern.ktrace |
KTRACE options |
kern.ktrace.filesize_limit_signal |
Send SIGXFSZ to the traced process when the log size limit is exceeded |
kern.ktrace.genio_size |
Maximum size of genio event payload |
kern.ktrace.request_pool |
Pool buffer size for ktrace(1) |
kern.lastpid |
Last used PID |
kern.lockf |
Advisory locks table |
kern.log_console_add_linefeed |
log_console() adds extra newlines |
kern.log_console_output |
Duplicate console output to the syslog |
kern.log_wakeups_per_second |
How often (times per second) to check for /dev/log waiters. |
kern.lognosys |
Log invalid syscalls |
kern.logsigexit |
Log processes quitting on abnormal signals to syslog(3) |
kern.malloc_count |
Count of kernel malloc types |
kern.malloc_stats |
Return malloc types |
kern.maxbcache |
Maximum value of vfs.maxbufspace |
kern.maxdsiz |
Maximum data size |
kern.maxfiles |
Maximum number of files |
kern.maxfilesperproc |
Maximum files allowed open per process |
kern.maxphys |
Maximum block I/O access size |
kern.maxproc |
Maximum number of processes |
kern.maxprocperuid |
Maximum processes allowed per userid |
kern.maxssiz |
Maximum stack size |
kern.maxswzone |
Maximum memory for swap metadata |
kern.maxthread |
Maximum number of threads |
kern.maxtsiz |
Maximum text size |
kern.maxusers |
Hint for kernel tuning |
kern.maxvnodes |
Target for maximum number of vnodes (legacy) |
kern.metadelay |
Time to delay syncing metadata (in seconds) |
kern.minvnodes |
Old name for vfs.wantfreevnodes (legacy) |
kern.module_path |
module load search path |
kern.mqueue |
POSIX real time message queue |
kern.mqueue.curmq |
current message queue number |
kern.mqueue.default_maxmsg |
Default maximum messages in queue |
kern.mqueue.default_msgsize |
Default maximum message size |
kern.mqueue.maxmq |
maximum message queues |
kern.mqueue.maxmsg |
maximum messages in queue |
kern.mqueue.maxmsgsize |
maximum message size |
kern.msgbuf |
Contents of kernel message buffer |
kern.msgbuf_clear |
Clear kernel message buffer |
kern.msgbuf_show_timestamp |
Show timestamp in msgbuf |
kern.msgbufsize |
Size of the kernel message buffer |
kern.nbuf |
Number of buffers in the buffer cache |
kern.ncallout |
Number of entries in callwheel and size of timeout() preallocation |
kern.ngroups |
Maximum number of supplemental groups a user can belong to |
kern.nodump_coredump |
Enable setting the NODUMP flag on coredump files |
kern.nswbuf |
Number of swap buffers |
kern.ntp_pll |
|
kern.ntp_pll.gettime |
|
kern.ntp_pll.pps_freq |
Scaled frequency offset (ns/sec) |
kern.ntp_pll.pps_shift |
Interval duration (sec) (shift) |
kern.ntp_pll.pps_shiftmax |
Max interval duration (sec) (shift) |
kern.ntp_pll.time_freq |
Frequency offset (ns/sec) |
kern.ntp_pll.time_monitor |
Last time offset scaled (ns) |
kern.openfiles |
System-wide number of open files |
kern.osreldate |
Kernel release date |
kern.osrelease |
Operating system release |
kern.osrevision |
Operating system revision |
kern.ostype |
Operating system type |
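Several of the kern.* leaves in this table are read-only strings (kern.ostype, kern.osrelease, kern.hostname, and so on). As a hedged sketch of how they might be fetched from C, the program below uses sysctlbyname(3) with a fixed-size buffer; the 256-byte buffer is an assumption for illustration, and a more careful program would first query the required length by passing a NULL buffer.

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    /* Read a string-typed sysctl node into a local buffer and print it. */
    static void
    print_string_sysctl(const char *name)
    {
            char buf[256];          /* illustrative size, see note above */
            size_t len = sizeof(buf);

            if (sysctlbyname(name, buf, &len, NULL, 0) == -1) {
                    perror(name);
                    return;
            }
            printf("%s = %s\n", name, buf);
    }

    int
    main(void)
    {
            print_string_sysctl("kern.ostype");
            print_string_sysctl("kern.osrelease");
            print_string_sysctl("kern.hostname");
            return (0);
    }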
kern.panic_reboot_wait_time |
Seconds to wait before rebooting after a panic |
kern.pid_max |
Maximum allowed pid |
kern.pid_max_limit |
Maximum allowed pid (kern.pid_max) top limit |
kern.pin_default_swi |
Pin the default (non-per-cpu) swi (shared with PCPU 0 swi) |
kern.pin_pcpu_swi |
Pin the per-CPU swis (except PCPU 0, which is also default) |
kern.posix1version |
Version of POSIX the kernel attempts to comply with |
kern.powercycle_on_panic |
Do a power cycle instead of a reboot on a panic |
kern.poweroff_on_panic |
Do a power off instead of a reboot on a panic |
kern.proc |
Process table |
kern.proc.all |
Return entire process table |
kern.proc.args |
Process argument list |
kern.proc.auxv |
Process ELF auxiliary vector |
kern.proc.cwd |
Process current working directory |
kern.proc.env |
Process environment |
kern.proc.filedesc |
Process filedesc entries |
kern.proc.gid |
Process table |
kern.proc.gid_td |
Process table |
kern.proc.groups |
Process groups |
kern.proc.kq |
KQueue events |
kern.proc.kstack |
Process kernel stacks |
kern.proc.nfds |
Number of open file descriptors |
kern.proc.ofiledesc |
Process ofiledesc entries |
kern.proc.osrel |
Process binary osreldate |
kern.proc.ovmmap |
Old Process vm map entries |
kern.proc.pathname |
Process executable path |
kern.proc.pgrp |
Process table |
kern.proc.pgrp_td |
Process table |
kern.proc.pid |
Process table |
kern.proc.pid_td |
Process table |
kern.proc.proc |
Return process table, no threads |
kern.proc.proc_td |
Return process table, including threads |
kern.proc.ps_strings |
Process ps_strings location |
kern.proc.rgid |
Process table |
kern.proc.rgid_td |
Process table |
kern.proc.rlimit |
Process resource limits |
kern.proc.rlimit_usage |
Process limited resources usage info |
kern.proc.ruid |
Process table |
kern.proc.ruid_td |
Process table |
kern.proc.sid |
Process table |
kern.proc.sid_td |
Process table |
kern.proc.sigfastblk |
Thread sigfastblock address |
kern.proc.sigtramp |
Process signal trampoline location |
kern.proc.sv_name |
Process syscall vector name (ABI type) |
kern.proc.tty |
Process table |
kern.proc.tty_td |
Process table |
kern.proc.uid |
Process table |
kern.proc.uid_td |
Process table |
kern.proc.umask |
Process umask |
kern.proc.vm_layout |
Process virtual address space layout info |
kern.proc.vmmap |
Process vm map entries |
kern.proc_vmmap_skip_resident_count |
Skip calculation of the pages resident count in kern.proc.vmmap |
kern.ps_arg_cache_limit |
Process' command line characters cache limit |
kern.ps_strings |
Location of process' ps_strings structure |
kern.racct |
Resource Accounting |
kern.racct.enable |
Enable RACCT/RCTL |
kern.racct.pcpu_threshold |
Processes with higher %cpu usage than this value can be throttled. |
kern.racct.rctl |
Resource Limits |
kern.racct.rctl.devctl_rate_limit |
Maximum number of devctl messages per second |
kern.racct.rctl.log_rate_limit |
Maximum number of log messages per second |
kern.racct.rctl.maxbufsize |
Maximum output buffer size |
kern.racct.rctl.throttle_max |
Longest throttling duration, in hz |
kern.racct.rctl.throttle_min |
Shortest throttling duration, in hz |
kern.racct.rctl.throttle_pct |
Throttling penalty for process consumption, in percent |
kern.racct.rctl.throttle_pct2 |
Throttling penalty for container consumption, in percent |
kern.random |
Cryptographically Secure Random Number Generator |
kern.random.block_seeded_status |
If non-zero, pretend Fortuna is in an unseeded state. By setting this as a tunable, boot can be tested as if the random device is unavailable. |
kern.random.fortuna |
Fortuna Parameters |
kern.random.fortuna.concurrent_read |
If non-zero, enable feature to improve concurrent Fortuna performance. |
kern.random.fortuna.minpoolsize |
Minimum pool size necessary to cause a reseed |
kern.random.harvest |
Entropy Device Parameters |
kern.random.harvest.mask |
Entropy harvesting mask |
kern.random.harvest.mask_bin |
Entropy harvesting mask (printable) |
kern.random.harvest.mask_symbolic |
Entropy harvesting mask (symbolic) |
kern.random.initial_seeding |
Initial seeding control and information |
kern.random.initial_seeding.arc4random_bypassed_before_seeding |
If non-zero, the random device was bypassed when initially seeding the kernel arc4random(9), because the 'bypass_before_seeding' knob was enabled and a request was submitted prior to initial seeding. |
kern.random.initial_seeding.bypass_before_seeding |
If set non-zero, bypass the random device in requests for random data when the random device is not yet seeded. This is considered dangerous. Ordinarily, the random device will block requests until it is seeded by sufficient entropy. |
kern.random.initial_seeding.disable_bypass_warnings |
If non-zero, do not log a warning if the 'bypass_before_seeding' knob is enabled and a request is submitted prior to initial seeding. |
kern.random.initial_seeding.read_random_bypassed_before_seeding |
If non-zero, the random device was bypassed because the 'bypass_before_seeding' knob was enabled and a request was submitted prior to initial seeding. |
kern.random.random_sources |
List of active fast entropy sources. |
kern.random.rdrand |
rdrand (ivy) entropy source |
kern.random.rdrand.rdrand_independent_seed |
If non-zero, use more expensive and slow, but safer, seeded samples where RDSEED is not present. |
kern.random.use_chacha20_cipher |
If non-zero, use the ChaCha20 cipher for randomdev PRF (default). If zero, use AES-ICM cipher for randomdev PRF (12.x default). |
kern.randompid |
Random PID modulus. Special values: 0: disable, 1: choose random value |
kern.reboot_wait_time |
Seconds to wait before rebooting |
kern.relbase_address |
Kernel relocated base address |
kern.rpc |
RPC |
kern.rpc.tls |
TLS |
kern.rpc.tls.alerts |
Count of TLS alert messages |
kern.rpc.tls.handshake_failed |
Count of TLS failed handshakes |
kern.rpc.tls.handshake_success |
Count of TLS successful handshakes |
kern.rpc.tls.rx_msgbytes |
Count of TLS rx bytes |
kern.rpc.tls.rx_msgcnt |
Count of TLS rx messages |
kern.rpc.tls.tx_msgbytes |
Count of TLS tx bytes |
kern.rpc.tls.tx_msgcnt |
Count of TLS tx messages |
kern.rpc.unenc |
unencrypted |
kern.rpc.unenc.rx_msgbytes |
Count of non-TLS rx bytes |
kern.rpc.unenc.rx_msgcnt |
Count of non-TLS rx messages |
kern.rpc.unenc.tx_msgbytes |
Count of non-TLS tx bytes |
kern.rpc.unenc.tx_msgcnt |
Count of non-TLS tx messages |
kern.saved_ids |
Whether saved set-group/user ID is available |
kern.sched |
Scheduler |
kern.sched.affinity |
Number of hz ticks to keep thread affinity for |
kern.sched.always_steal |
Always run the stealer from the idle thread |
kern.sched.balance |
Enables the long-term load balancer |
kern.sched.balance_interval |
Average period in stathz ticks to run the long-term balancer |
kern.sched.cpusetsize |
sizeof(cpuset_t) |
kern.sched.cpusetsizemin |
The minimum size of cpuset_t allowed by the kernel |
kern.sched.idlespins |
Number of times idle thread will spin waiting for new work |
kern.sched.idlespinthresh |
Threshold before we will permit idle thread spinning |
kern.sched.interact |
Interactivity score threshold |
kern.sched.name |
Scheduler name |
kern.sched.preempt_thresh |
Maximal (lowest) priority for preemption |
kern.sched.preemption |
Kernel preemption enabled |
kern.sched.quantum |
Quantum for timeshare threads in microseconds |
kern.sched.slice |
Quantum for timeshare threads in stathz ticks |
kern.sched.static_boost |
Assign static kernel priorities to sleeping threads |
kern.sched.steal_idle |
Attempts to steal work from other cores before idling |
kern.sched.steal_thresh |
Minimum load on remote CPU before we'll steal |
kern.sched.topology_spec |
XML dump of detected CPU topology |
kern.sched.trysteal_limit |
Topological distance limit for stealing threads in sched_switch() |
kern.securelevel |
Current secure level |
kern.sgrowsiz |
Amount to grow stack on a stack fault |
kern.shutdown |
Shutdown environment |
kern.shutdown.dumpdevname |
Device(s) for kernel dumps |
kern.shutdown.kproc_shutdown_wait |
Max wait time (sec) to stop for each process |
kern.shutdown.poweroff_delay |
Delay before poweroff to write disk caches (msec) |
kern.shutdown.show_busybufs |
Show busy buffers during shutdown |
kern.sig_discard_ign |
Discard ignored signals on delivery, otherwise queue them to the target queue |
kern.sigfastblock_fetch_always |
Fetch sigfastblock word on each syscall entry for proper blocking semantic |
kern.signosys |
Send SIGSYS on return from invalid syscall |
kern.sigqueue |
POSIX real time signal |
kern.sigqueue.alloc_fail |
Signals that failed to be allocated |
kern.sigqueue.max_pending_per_proc |
Max pending signals per proc |
kern.sigqueue.overflow |
Number of signals that overflowed |
kern.sigqueue.preallocate |
Preallocated signal memory size |
kern.smp |
Kernel SMP |
kern.smp.active |
Indicates system is running in SMP mode |
kern.smp.cores |
Number of physical cores online |
kern.smp.cpus |
Number of CPUs online |
kern.smp.disabled |
SMP has been disabled from the loader |
kern.smp.forward_signal_enabled |
Forwarding of a signal to a process on a different CPU |
kern.smp.maxcpus |
Max number of CPUs that the system was compiled for. |
kern.smp.maxid |
Max CPU ID. |
kern.smp.threads_per_core |
Number of SMT threads online per core |
kern.smp.topology |
Topology override setting; 0 is default provided by hardware. |
kern.stackprot |
Stack memory permissions |
kern.sugid_coredump |
Allow setuid and setgid processes to dump core |
kern.supported_archs |
Supported architectures for binaries |
kern.suspend_blocked |
Block suspend due to a pending shutdown |
kern.sync_on_panic |
Do a sync before rebooting from a panic |
kern.threads |
thread allocation |
kern.threads.max_threads_hits |
kern.threads.max_threads_per_proc hit count |
kern.threads.max_threads_per_proc |
Limit on threads per proc |
kern.timecounter |
|
kern.timecounter.alloweddeviation |
Allowed time interval deviation, in percent |
kern.timecounter.choice |
Timecounter hardware detected |
kern.timecounter.fast_gettime |
Enable fast time of day |
kern.timecounter.hardware |
Timecounter hardware selected |
kern.timecounter.invariant_tsc |
Indicates whether the TSC is P-state invariant |
kern.timecounter.nanosleep_precise |
clock_nanosleep() with CLOCK_REALTIME, CLOCK_MONOTONIC, CLOCK_UPTIME and nanosleep(2) use precise clock |
kern.timecounter.smp_tsc |
Indicates whether the TSC is safe to use in SMP mode |
kern.timecounter.smp_tsc_adjust |
Try to adjust TSC on APs to match BSP |
kern.timecounter.stepwarnings |
Log time steps |
kern.timecounter.tc |
|
kern.timecounter.tc.ACPI-fast |
timecounter description |
kern.timecounter.tc.ACPI-fast.counter |
current timecounter value |
kern.timecounter.tc.ACPI-fast.frequency |
timecounter frequency |
kern.timecounter.tc.ACPI-fast.mask |
mask for implemented bits |
kern.timecounter.tc.ACPI-fast.quality |
goodness of time counter |
kern.timecounter.tc.HPET |
timecounter description |
kern.timecounter.tc.HPET.counter |
current timecounter value |
kern.timecounter.tc.HPET.frequency |
timecounter frequency |
kern.timecounter.tc.HPET.mask |
mask for implemented bits |
kern.timecounter.tc.HPET.quality |
goodness of time counter |
kern.timecounter.tc.Hyper-V |
timecounter description |
kern.timecounter.tc.Hyper-V.counter |
current timecounter value |
kern.timecounter.tc.Hyper-V.frequency |
timecounter frequency |
kern.timecounter.tc.Hyper-V.mask |
mask for implemented bits |
kern.timecounter.tc.Hyper-V.quality |
goodness of time counter |
kern.timecounter.tc.Hyper-V-TSC |
timecounter description |
kern.timecounter.tc.Hyper-V-TSC.counter |
current timecounter value |
kern.timecounter.tc.Hyper-V-TSC.frequency |
timecounter frequency |
kern.timecounter.tc.Hyper-V-TSC.mask |
mask for implemented bits |
kern.timecounter.tc.Hyper-V-TSC.quality |
goodness of time counter |
kern.timecounter.tc.TSC |
timecounter description |
kern.timecounter.tc.TSC.counter |
current timecounter value |
kern.timecounter.tc.TSC.frequency |
timecounter frequency |
kern.timecounter.tc.TSC.mask |
mask for implemented bits |
kern.timecounter.tc.TSC.quality |
goodness of time counter |
kern.timecounter.tc.TSC-low |
timecounter description |
kern.timecounter.tc.TSC-low.counter |
current timecounter value |
kern.timecounter.tc.TSC-low.frequency |
timecounter frequency |
kern.timecounter.tc.TSC-low.mask |
mask for implemented bits |
kern.timecounter.tc.TSC-low.quality |
goodness of time counter |
kern.timecounter.tc.i8254 |
timecounter description |
kern.timecounter.tc.i8254.counter |
current timecounter value |
kern.timecounter.tc.i8254.frequency |
timecounter frequency |
kern.timecounter.tc.i8254.mask |
mask for implemented bits |
kern.timecounter.tc.i8254.quality |
goodness of time counter |
kern.timecounter.tc.kvmclock |
timecounter description |
kern.timecounter.tc.kvmclock.counter |
current timecounter value |
kern.timecounter.tc.kvmclock.frequency |
timecounter frequency |
kern.timecounter.tc.kvmclock.mask |
mask for implemented bits |
kern.timecounter.tc.kvmclock.quality |
goodness of time counter |
kern.timecounter.tick |
Approximate number of hardclock ticks in a millisecond |
kern.timecounter.timehands_count |
Count of timehands in rotation |
kern.timecounter.tsc_shift |
Shift to pre-apply for the maximum TSC frequency |
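Writable nodes in this subtree can be changed through the same interface by supplying the new value in the last two arguments of sysctlbyname(3). The sketch below toggles kern.timecounter.stepwarnings ("Log time steps") as an example; it assumes that node is a writable int on the running kernel, and the call requires sufficient privilege (it fails with EPERM otherwise).

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            int oldval, newval = 1;
            size_t oldlen = sizeof(oldval);

            /* Fetch the previous value and install the new one in a single call. */
            if (sysctlbyname("kern.timecounter.stepwarnings",
                &oldval, &oldlen, &newval, sizeof(newval)) == -1) {
                    perror("kern.timecounter.stepwarnings");
                    return (1);
            }
            printf("stepwarnings: %d -> %d\n", oldval, newval);
            return (0);
    }

The same pattern applies to other writable tunables in this list; nodes that are not writable reject the attempt with an error instead.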
kern.trap_enotcap |
Deliver SIGTRAP on ECAPMODE and ENOTCAPABLE |
kern.tty_drainwait |
Default output drain timeout in seconds |
kern.tty_info_kstacks |
Adjust format of kernel stack(9) traces on ^T (tty info): 0 - disabled; 1 - long; 2 - compact |
kern.tty_inq_flush_secure |
Zero buffers while flushing |
kern.tty_nin |
Total amount of bytes received |
kern.tty_nout |
Total amount of bytes transmitted |
kern.ttys |
List of TTYs |
kern.usrstack |
Top of process stack |
kern.version |
Kernel version |
kern.vm_guest |
Virtual machine guest detected? |
kern.vt |
vt(9) parameters |
kern.vt.deadtimer |
Time to wait busy process in VT_PROCESS mode |
kern.vt.debug |
vt(9) debug level |
kern.vt.enable_altgr |
Enable AltGr key (Do not assume R.Alt as Alt) |
kern.vt.enable_bell |
Enable bell |
kern.vt.kbd_debug |
Enable key combination to enter debugger. See kbdmap(5) to configure (typically Ctrl-Alt-Esc). |
kern.vt.kbd_halt |
Enable halt keyboard combination. See kbdmap(5) to configure. |
kern.vt.kbd_panic |
Enable request to panic. See kbdmap(5) to configure. |
kern.vt.kbd_poweroff |
Enable Power Off keyboard combination. See kbdmap(5) to configure. |
kern.vt.kbd_reboot |
Enable reboot keyboard combination. See kbdmap(5) to configure (typically Ctrl-Alt-Delete). |
kern.vt.slow_down |
Non-zero makes the console slower and synchronous. |
kern.vt.splash_cpu |
Show logo CPUs during boot |
kern.vt.splash_cpu_duration |
Hide logos after (seconds) |
kern.vt.splash_cpu_style |
Draw logo style (0 = Alternate beastie, 1 = Beastie, 2 = Orb) |
kern.vt.splash_ncpu |
Override number of logos displayed (0 = do not override) |
kern.vt.suspendswitch |
Switch to VT0 before suspend |
kern.vty |
Console vty driver |
kern.wait_dequeue_sigchld |
Dequeue SIGCHLD on wait(2) for live process |
kstat |
Kernel statistics |
kstat.zfs |
|
kstat.zfs.data01 |
|
kstat.zfs.data01.dataset |
|
kstat.zfs.data01.dataset.objset-[num] |
|
kstat.zfs.data01.dataset.objset-[num].dataset_name |
dataset_name |
kstat.zfs.data01.dataset.objset-[num].nread |
nread |
kstat.zfs.data01.dataset.objset-[num].nunlinked |
nunlinked |
kstat.zfs.data01.dataset.objset-[num].nunlinks |
nunlinks |
kstat.zfs.data01.dataset.objset-[num].nwritten |
nwritten |
kstat.zfs.data01.dataset.objset-[num].reads |
reads |
kstat.zfs.data01.dataset.objset-[num].writes |
writes |
kstat.zfs.data01.dataset.objset-[num].zil_commit_count |
zil_commit_count |
kstat.zfs.data01.dataset.objset-[num].zil_commit_writer_count |
zil_commit_writer_count |
kstat.zfs.data01.dataset.objset-[num].zil_itx_copied_bytes |
zil_itx_copied_bytes |
kstat.zfs.data01.dataset.objset-[num].zil_itx_copied_count |
zil_itx_copied_count |
kstat.zfs.data01.dataset.objset-[num].zil_itx_count |
zil_itx_count |
kstat.zfs.data01.dataset.objset-[num].zil_itx_indirect_bytes |
zil_itx_indirect_bytes |
kstat.zfs.data01.dataset.objset-[num].zil_itx_indirect_count |
zil_itx_indirect_count |
kstat.zfs.data01.dataset.objset-[num].zil_itx_metaslab_normal_alloc |
zil_itx_metaslab_normal_alloc |
kstat.zfs.data01.dataset.objset-[num].zil_itx_metaslab_normal_bytes |
zil_itx_metaslab_normal_bytes |
kstat.zfs.data01.dataset.objset-[num].zil_itx_metaslab_normal_count |
zil_itx_metaslab_normal_count |
kstat.zfs.data01.dataset.objset-[num].zil_itx_metaslab_normal_write |
zil_itx_metaslab_normal_write |
kstat.zfs.data01.dataset.objset-[num].zil_itx_metaslab_slog_alloc |
zil_itx_metaslab_slog_alloc |
kstat.zfs.data01.dataset.objset-[num].zil_itx_metaslab_slog_bytes |
zil_itx_metaslab_slog_bytes |
kstat.zfs.data01.dataset.objset-[num].zil_itx_metaslab_slog_count |
zil_itx_metaslab_slog_count |
kstat.zfs.data01.dataset.objset-[num].zil_itx_metaslab_slog_write |
zil_itx_metaslab_slog_write |
kstat.zfs.data01.dataset.objset-[num].zil_itx_needcopy_bytes |
zil_itx_needcopy_bytes |
kstat.zfs.data01.dataset.objset-[num].zil_itx_needcopy_count |
zil_itx_needcopy_count |
kstat.zfs.data01.misc |
|
kstat.zfs.data01.misc.dmu_tx_assign |
|
kstat.zfs.data01.misc.dmu_tx_assign.[num] ns |
1 ns |
kstat.zfs.data01.misc.guid |
guid |
kstat.zfs.data01.misc.iostats |
|
kstat.zfs.data01.misc.iostats.autotrim_bytes_failed |
autotrim_bytes_failed |
kstat.zfs.data01.misc.iostats.autotrim_bytes_skipped |
autotrim_bytes_skipped |
kstat.zfs.data01.misc.iostats.autotrim_bytes_written |
autotrim_bytes_written |
kstat.zfs.data01.misc.iostats.autotrim_extents_failed |
autotrim_extents_failed |
kstat.zfs.data01.misc.iostats.autotrim_extents_skipped |
autotrim_extents_skipped |
kstat.zfs.data01.misc.iostats.autotrim_extents_written |
autotrim_extents_written |
kstat.zfs.data01.misc.iostats.simple_trim_bytes_failed |
simple_trim_bytes_failed |
kstat.zfs.data01.misc.iostats.simple_trim_bytes_skipped |
simple_trim_bytes_skipped |
kstat.zfs.data01.misc.iostats.simple_trim_bytes_written |
simple_trim_bytes_written |
kstat.zfs.data01.misc.iostats.simple_trim_extents_failed |
simple_trim_extents_failed |
kstat.zfs.data01.misc.iostats.simple_trim_extents_skipped |
simple_trim_extents_skipped |
kstat.zfs.data01.misc.iostats.simple_trim_extents_written |
simple_trim_extents_written |
kstat.zfs.data01.misc.iostats.trim_bytes_failed |
trim_bytes_failed |
kstat.zfs.data01.misc.iostats.trim_bytes_skipped |
trim_bytes_skipped |
kstat.zfs.data01.misc.iostats.trim_bytes_written |
trim_bytes_written |
kstat.zfs.data01.misc.iostats.trim_extents_failed |
trim_extents_failed |
kstat.zfs.data01.misc.iostats.trim_extents_skipped |
trim_extents_skipped |
kstat.zfs.data01.misc.iostats.trim_extents_written |
trim_extents_written |
kstat.zfs.data01.misc.state |
state |
kstat.zfs.data01.multihost |
multihost |
kstat.zfs.data01.reads |
reads |
kstat.zfs.data01.txgs |
txgs |
kstat.zfs.data02 |
|
kstat.zfs.data02.dataset |
|
kstat.zfs.data02.dataset.objset-[num] |
|
kstat.zfs.data02.dataset.objset-[num].dataset_name |
dataset_name |
kstat.zfs.data02.dataset.objset-[num].nread |
nread |
kstat.zfs.data02.dataset.objset-[num].nunlinked |
nunlinked |
kstat.zfs.data02.dataset.objset-[num].nunlinks |
nunlinks |
kstat.zfs.data02.dataset.objset-[num].nwritten |
nwritten |
kstat.zfs.data02.dataset.objset-[num].reads |
reads |
kstat.zfs.data02.dataset.objset-[num].writes |
writes |
kstat.zfs.data02.dataset.objset-[num].zil_commit_count |
zil_commit_count |
kstat.zfs.data02.dataset.objset-[num].zil_commit_writer_count |
zil_commit_writer_count |
kstat.zfs.data02.dataset.objset-[num].zil_itx_copied_bytes |
zil_itx_copied_bytes |
kstat.zfs.data02.dataset.objset-[num].zil_itx_copied_count |
zil_itx_copied_count |
kstat.zfs.data02.dataset.objset-[num].zil_itx_count |
zil_itx_count |
kstat.zfs.data02.dataset.objset-[num].zil_itx_indirect_bytes |
zil_itx_indirect_bytes |
kstat.zfs.data02.dataset.objset-[num].zil_itx_indirect_count |
zil_itx_indirect_count |
kstat.zfs.data02.dataset.objset-[num].zil_itx_metaslab_normal_alloc |
zil_itx_metaslab_normal_alloc |
kstat.zfs.data02.dataset.objset-[num].zil_itx_metaslab_normal_bytes |
zil_itx_metaslab_normal_bytes |
kstat.zfs.data02.dataset.objset-[num].zil_itx_metaslab_normal_count |
zil_itx_metaslab_normal_count |
kstat.zfs.data02.dataset.objset-[num].zil_itx_metaslab_normal_write |
zil_itx_metaslab_normal_write |
kstat.zfs.data02.dataset.objset-[num].zil_itx_metaslab_slog_alloc |
zil_itx_metaslab_slog_alloc |
kstat.zfs.data02.dataset.objset-[num].zil_itx_metaslab_slog_bytes |
zil_itx_metaslab_slog_bytes |
kstat.zfs.data02.dataset.objset-[num].zil_itx_metaslab_slog_count |
zil_itx_metaslab_slog_count |
kstat.zfs.data02.dataset.objset-[num].zil_itx_metaslab_slog_write |
zil_itx_metaslab_slog_write |
kstat.zfs.data02.dataset.objset-[num].zil_itx_needcopy_bytes |
zil_itx_needcopy_bytes |
kstat.zfs.data02.dataset.objset-[num].zil_itx_needcopy_count |
zil_itx_needcopy_count |
kstat.zfs.data02.misc |
|
kstat.zfs.data02.misc.dmu_tx_assign |
|
kstat.zfs.data02.misc.dmu_tx_assign.[num] ns |
1 ns |
kstat.zfs.data02.misc.guid |
guid |
kstat.zfs.data02.misc.iostats |
|
kstat.zfs.data02.misc.iostats.autotrim_bytes_failed |
autotrim_bytes_failed |
kstat.zfs.data02.misc.iostats.autotrim_bytes_skipped |
autotrim_bytes_skipped |
kstat.zfs.data02.misc.iostats.autotrim_bytes_written |
autotrim_bytes_written |
kstat.zfs.data02.misc.iostats.autotrim_extents_failed |
autotrim_extents_failed |
kstat.zfs.data02.misc.iostats.autotrim_extents_skipped |
autotrim_extents_skipped |
kstat.zfs.data02.misc.iostats.autotrim_extents_written |
autotrim_extents_written |
kstat.zfs.data02.misc.iostats.simple_trim_bytes_failed |
simple_trim_bytes_failed |
kstat.zfs.data02.misc.iostats.simple_trim_bytes_skipped |
simple_trim_bytes_skipped |
kstat.zfs.data02.misc.iostats.simple_trim_bytes_written |
simple_trim_bytes_written |
kstat.zfs.data02.misc.iostats.simple_trim_extents_failed |
simple_trim_extents_failed |
kstat.zfs.data02.misc.iostats.simple_trim_extents_skipped |
simple_trim_extents_skipped |
kstat.zfs.data02.misc.iostats.simple_trim_extents_written |
simple_trim_extents_written |
kstat.zfs.data02.misc.iostats.trim_bytes_failed |
trim_bytes_failed |
kstat.zfs.data02.misc.iostats.trim_bytes_skipped |
trim_bytes_skipped |
kstat.zfs.data02.misc.iostats.trim_bytes_written |
trim_bytes_written |
kstat.zfs.data02.misc.iostats.trim_extents_failed |
trim_extents_failed |
kstat.zfs.data02.misc.iostats.trim_extents_skipped |
trim_extents_skipped |
kstat.zfs.data02.misc.iostats.trim_extents_written |
trim_extents_written |
kstat.zfs.data02.misc.state |
state |
kstat.zfs.data02.multihost |
multihost |
kstat.zfs.data02.reads |
reads |
kstat.zfs.data02.txgs |
txgs |
kstat.zfs.data03 |
|
kstat.zfs.data03.dataset |
|
kstat.zfs.data03.dataset.objset-[num] |
|
kstat.zfs.data03.dataset.objset-[num].dataset_name |
dataset_name |
kstat.zfs.data03.dataset.objset-[num].nread |
nread |
kstat.zfs.data03.dataset.objset-[num].nunlinked |
nunlinked |
kstat.zfs.data03.dataset.objset-[num].nunlinks |
nunlinks |
kstat.zfs.data03.dataset.objset-[num].nwritten |
nwritten |
kstat.zfs.data03.dataset.objset-[num].reads |
reads |
kstat.zfs.data03.dataset.objset-[num].writes |
writes |
kstat.zfs.data03.dataset.objset-[num].zil_commit_count |
zil_commit_count |
kstat.zfs.data03.dataset.objset-[num].zil_commit_writer_count |
zil_commit_writer_count |
kstat.zfs.data03.dataset.objset-[num].zil_itx_copied_bytes |
zil_itx_copied_bytes |
kstat.zfs.data03.dataset.objset-[num].zil_itx_copied_count |
zil_itx_copied_count |
kstat.zfs.data03.dataset.objset-[num].zil_itx_count |
zil_itx_count |
kstat.zfs.data03.dataset.objset-[num].zil_itx_indirect_bytes |
zil_itx_indirect_bytes |
kstat.zfs.data03.dataset.objset-[num].zil_itx_indirect_count |
zil_itx_indirect_count |
kstat.zfs.data03.dataset.objset-[num].zil_itx_metaslab_normal_alloc |
zil_itx_metaslab_normal_alloc |
kstat.zfs.data03.dataset.objset-[num].zil_itx_metaslab_normal_bytes |
zil_itx_metaslab_normal_bytes |
kstat.zfs.data03.dataset.objset-[num].zil_itx_metaslab_normal_count |
zil_itx_metaslab_normal_count |
kstat.zfs.data03.dataset.objset-[num].zil_itx_metaslab_normal_write |
zil_itx_metaslab_normal_write |
kstat.zfs.data03.dataset.objset-[num].zil_itx_metaslab_slog_alloc |
zil_itx_metaslab_slog_alloc |
kstat.zfs.data03.dataset.objset-[num].zil_itx_metaslab_slog_bytes |
zil_itx_metaslab_slog_bytes |
kstat.zfs.data03.dataset.objset-[num].zil_itx_metaslab_slog_count |
zil_itx_metaslab_slog_count |
kstat.zfs.data03.dataset.objset-[num].zil_itx_metaslab_slog_write |
zil_itx_metaslab_slog_write |
kstat.zfs.data03.dataset.objset-[num].zil_itx_needcopy_bytes |
zil_itx_needcopy_bytes |
kstat.zfs.data03.dataset.objset-[num].zil_itx_needcopy_count |
zil_itx_needcopy_count |
kstat.zfs.data03.misc |
|
kstat.zfs.data03.misc.dmu_tx_assign |
|
kstat.zfs.data03.misc.dmu_tx_assign.[num] ns |
1 ns |
kstat.zfs.data03.misc.guid |
guid |
kstat.zfs.data03.misc.iostats |
|
kstat.zfs.data03.misc.iostats.autotrim_bytes_failed |
autotrim_bytes_failed |
kstat.zfs.data03.misc.iostats.autotrim_bytes_skipped |
autotrim_bytes_skipped |
kstat.zfs.data03.misc.iostats.autotrim_bytes_written |
autotrim_bytes_written |
kstat.zfs.data03.misc.iostats.autotrim_extents_failed |
autotrim_extents_failed |
kstat.zfs.data03.misc.iostats.autotrim_extents_skipped |
autotrim_extents_skipped |
kstat.zfs.data03.misc.iostats.autotrim_extents_written |
autotrim_extents_written |
kstat.zfs.data03.misc.iostats.simple_trim_bytes_failed |
simple_trim_bytes_failed |
kstat.zfs.data03.misc.iostats.simple_trim_bytes_skipped |
simple_trim_bytes_skipped |
kstat.zfs.data03.misc.iostats.simple_trim_bytes_written |
simple_trim_bytes_written |
kstat.zfs.data03.misc.iostats.simple_trim_extents_failed |
simple_trim_extents_failed |
kstat.zfs.data03.misc.iostats.simple_trim_extents_skipped |
simple_trim_extents_skipped |
kstat.zfs.data03.misc.iostats.simple_trim_extents_written |
simple_trim_extents_written |
kstat.zfs.data03.misc.iostats.trim_bytes_failed |
trim_bytes_failed |
kstat.zfs.data03.misc.iostats.trim_bytes_skipped |
trim_bytes_skipped |
kstat.zfs.data03.misc.iostats.trim_bytes_written |
trim_bytes_written |
kstat.zfs.data03.misc.iostats.trim_extents_failed |
trim_extents_failed |
kstat.zfs.data03.misc.iostats.trim_extents_skipped |
trim_extents_skipped |
kstat.zfs.data03.misc.iostats.trim_extents_written |
trim_extents_written |
kstat.zfs.data03.misc.state |
state |
kstat.zfs.data03.multihost |
multihost |
kstat.zfs.data03.reads |
reads |
kstat.zfs.data03.txgs |
txgs |
kstat.zfs.data04 |
|
kstat.zfs.data04.dataset |
|
kstat.zfs.data04.dataset.objset-[num] |
|
kstat.zfs.data04.dataset.objset-[num].dataset_name |
dataset_name |
kstat.zfs.data04.dataset.objset-[num].nread |
nread |
kstat.zfs.data04.dataset.objset-[num].nunlinked |
nunlinked |
kstat.zfs.data04.dataset.objset-[num].nunlinks |
nunlinks |
kstat.zfs.data04.dataset.objset-[num].nwritten |
nwritten |
kstat.zfs.data04.dataset.objset-[num].reads |
reads |
kstat.zfs.data04.dataset.objset-[num].writes |
writes |
kstat.zfs.data04.dataset.objset-[num].zil_commit_count |
zil_commit_count |
kstat.zfs.data04.dataset.objset-[num].zil_commit_writer_count |
zil_commit_writer_count |
kstat.zfs.data04.dataset.objset-[num].zil_itx_copied_bytes |
zil_itx_copied_bytes |
kstat.zfs.data04.dataset.objset-[num].zil_itx_copied_count |
zil_itx_copied_count |
kstat.zfs.data04.dataset.objset-[num].zil_itx_count |
zil_itx_count |
kstat.zfs.data04.dataset.objset-[num].zil_itx_indirect_bytes |
zil_itx_indirect_bytes |
kstat.zfs.data04.dataset.objset-[num].zil_itx_indirect_count |
zil_itx_indirect_count |
kstat.zfs.data04.dataset.objset-[num].zil_itx_metaslab_normal_alloc |
zil_itx_metaslab_normal_alloc |
kstat.zfs.data04.dataset.objset-[num].zil_itx_metaslab_normal_bytes |
zil_itx_metaslab_normal_bytes |
kstat.zfs.data04.dataset.objset-[num].zil_itx_metaslab_normal_count |
zil_itx_metaslab_normal_count |
kstat.zfs.data04.dataset.objset-[num].zil_itx_metaslab_normal_write |
zil_itx_metaslab_normal_write |
kstat.zfs.data04.dataset.objset-[num].zil_itx_metaslab_slog_alloc |
zil_itx_metaslab_slog_alloc |
kstat.zfs.data04.dataset.objset-[num].zil_itx_metaslab_slog_bytes |
zil_itx_metaslab_slog_bytes |
kstat.zfs.data04.dataset.objset-[num].zil_itx_metaslab_slog_count |
zil_itx_metaslab_slog_count |
kstat.zfs.data04.dataset.objset-[num].zil_itx_metaslab_slog_write |
zil_itx_metaslab_slog_write |
kstat.zfs.data04.dataset.objset-[num].zil_itx_needcopy_bytes |
zil_itx_needcopy_bytes |
kstat.zfs.data04.dataset.objset-[num].zil_itx_needcopy_count |
zil_itx_needcopy_count |
kstat.zfs.data04.misc |
|
kstat.zfs.data04.misc.dmu_tx_assign |
|
kstat.zfs.data04.misc.dmu_tx_assign.[num] ns |
1 ns |
kstat.zfs.data04.misc.guid |
guid |
kstat.zfs.data04.misc.iostats |
|
kstat.zfs.data04.misc.iostats.autotrim_bytes_failed |
autotrim_bytes_failed |
kstat.zfs.data04.misc.iostats.autotrim_bytes_skipped |
autotrim_bytes_skipped |
kstat.zfs.data04.misc.iostats.autotrim_bytes_written |
autotrim_bytes_written |
kstat.zfs.data04.misc.iostats.autotrim_extents_failed |
autotrim_extents_failed |
kstat.zfs.data04.misc.iostats.autotrim_extents_skipped |
autotrim_extents_skipped |
kstat.zfs.data04.misc.iostats.autotrim_extents_written |
autotrim_extents_written |
kstat.zfs.data04.misc.iostats.simple_trim_bytes_failed |
simple_trim_bytes_failed |
kstat.zfs.data04.misc.iostats.simple_trim_bytes_skipped |
simple_trim_bytes_skipped |
kstat.zfs.data04.misc.iostats.simple_trim_bytes_written |
simple_trim_bytes_written |
kstat.zfs.data04.misc.iostats.simple_trim_extents_failed |
simple_trim_extents_failed |
kstat.zfs.data04.misc.iostats.simple_trim_extents_skipped |
simple_trim_extents_skipped |
kstat.zfs.data04.misc.iostats.simple_trim_extents_written |
simple_trim_extents_written |
kstat.zfs.data04.misc.iostats.trim_bytes_failed |
trim_bytes_failed |
kstat.zfs.data04.misc.iostats.trim_bytes_skipped |
trim_bytes_skipped |
kstat.zfs.data04.misc.iostats.trim_bytes_written |
trim_bytes_written |
kstat.zfs.data04.misc.iostats.trim_extents_failed |
trim_extents_failed |
kstat.zfs.data04.misc.iostats.trim_extents_skipped |
trim_extents_skipped |
kstat.zfs.data04.misc.iostats.trim_extents_written |
trim_extents_written |
kstat.zfs.data04.misc.state |
state |
kstat.zfs.data04.multihost |
multihost |
kstat.zfs.data04.reads |
reads |
kstat.zfs.data04.txgs |
txgs |
kstat.zfs.main_tank |
|
kstat.zfs.main_tank.dataset |
|
kstat.zfs.main_tank.dataset.objset-[num] |
|
kstat.zfs.main_tank.dataset.objset-[num].dataset_name |
dataset_name |
kstat.zfs.main_tank.dataset.objset-[num].nread |
nread |
kstat.zfs.main_tank.dataset.objset-[num].nunlinked |
nunlinked |
kstat.zfs.main_tank.dataset.objset-[num].nunlinks |
nunlinks |
kstat.zfs.main_tank.dataset.objset-[num].nwritten |
nwritten |
kstat.zfs.main_tank.dataset.objset-[num].reads |
reads |
kstat.zfs.main_tank.dataset.objset-[num].writes |
writes |
kstat.zfs.main_tank.dataset.objset-[num].zil_commit_count |
zil_commit_count |
kstat.zfs.main_tank.dataset.objset-[num].zil_commit_writer_count |
zil_commit_writer_count |
kstat.zfs.main_tank.dataset.objset-[num].zil_itx_copied_bytes |
zil_itx_copied_bytes |
kstat.zfs.main_tank.dataset.objset-[num].zil_itx_copied_count |
zil_itx_copied_count |
kstat.zfs.main_tank.dataset.objset-[num].zil_itx_count |
zil_itx_count |
kstat.zfs.main_tank.dataset.objset-[num].zil_itx_indirect_bytes |
zil_itx_indirect_bytes |
kstat.zfs.main_tank.dataset.objset-[num].zil_itx_indirect_count |
zil_itx_indirect_count |
kstat.zfs.main_tank.dataset.objset-[num].zil_itx_metaslab_normal_alloc |
zil_itx_metaslab_normal_alloc |
kstat.zfs.main_tank.dataset.objset-[num].zil_itx_metaslab_normal_bytes |
zil_itx_metaslab_normal_bytes |
kstat.zfs.main_tank.dataset.objset-[num].zil_itx_metaslab_normal_count |
zil_itx_metaslab_normal_count |
kstat.zfs.main_tank.dataset.objset-[num].zil_itx_metaslab_normal_write |
zil_itx_metaslab_normal_write |
kstat.zfs.main_tank.dataset.objset-[num].zil_itx_metaslab_slog_alloc |
zil_itx_metaslab_slog_alloc |
kstat.zfs.main_tank.dataset.objset-[num].zil_itx_metaslab_slog_bytes |
zil_itx_metaslab_slog_bytes |
kstat.zfs.main_tank.dataset.objset-[num].zil_itx_metaslab_slog_count |
zil_itx_metaslab_slog_count |
kstat.zfs.main_tank.dataset.objset-[num].zil_itx_metaslab_slog_write |
zil_itx_metaslab_slog_write |
kstat.zfs.main_tank.dataset.objset-[num].zil_itx_needcopy_bytes |
zil_itx_needcopy_bytes |
kstat.zfs.main_tank.dataset.objset-[num].zil_itx_needcopy_count |
zil_itx_needcopy_count |
kstat.zfs.main_tank.misc |
|
kstat.zfs.main_tank.misc.dmu_tx_assign |
|
kstat.zfs.main_tank.misc.dmu_tx_assign.[num] ns |
1 ns |
kstat.zfs.main_tank.misc.guid |
guid |
kstat.zfs.main_tank.misc.iostats |
|
kstat.zfs.main_tank.misc.iostats.autotrim_bytes_failed |
autotrim_bytes_failed |
kstat.zfs.main_tank.misc.iostats.autotrim_bytes_skipped |
autotrim_bytes_skipped |
kstat.zfs.main_tank.misc.iostats.autotrim_bytes_written |
autotrim_bytes_written |
kstat.zfs.main_tank.misc.iostats.autotrim_extents_failed |
autotrim_extents_failed |
kstat.zfs.main_tank.misc.iostats.autotrim_extents_skipped |
autotrim_extents_skipped |
kstat.zfs.main_tank.misc.iostats.autotrim_extents_written |
autotrim_extents_written |
kstat.zfs.main_tank.misc.iostats.simple_trim_bytes_failed |
simple_trim_bytes_failed |
kstat.zfs.main_tank.misc.iostats.simple_trim_bytes_skipped |
simple_trim_bytes_skipped |
kstat.zfs.main_tank.misc.iostats.simple_trim_bytes_written |
simple_trim_bytes_written |
kstat.zfs.main_tank.misc.iostats.simple_trim_extents_failed |
simple_trim_extents_failed |
kstat.zfs.main_tank.misc.iostats.simple_trim_extents_skipped |
simple_trim_extents_skipped |
kstat.zfs.main_tank.misc.iostats.simple_trim_extents_written |
simple_trim_extents_written |
kstat.zfs.main_tank.misc.iostats.trim_bytes_failed |
trim_bytes_failed |
kstat.zfs.main_tank.misc.iostats.trim_bytes_skipped |
trim_bytes_skipped |
kstat.zfs.main_tank.misc.iostats.trim_bytes_written |
trim_bytes_written |
kstat.zfs.main_tank.misc.iostats.trim_extents_failed |
trim_extents_failed |
kstat.zfs.main_tank.misc.iostats.trim_extents_skipped |
trim_extents_skipped |
kstat.zfs.main_tank.misc.iostats.trim_extents_written |
trim_extents_written |
kstat.zfs.main_tank.misc.state |
state |
kstat.zfs.main_tank.multihost |
multihost |
kstat.zfs.main_tank.reads |
reads |
kstat.zfs.main_tank.txgs |
txgs |
kstat.zfs.misc |
|
kstat.zfs.misc.abdstats |
|
kstat.zfs.misc.abdstats.linear_cnt |
linear_cnt |
kstat.zfs.misc.abdstats.linear_data_size |
linear_data_size |
kstat.zfs.misc.abdstats.scatter_chunk_waste |
scatter_chunk_waste |
kstat.zfs.misc.abdstats.scatter_cnt |
scatter_cnt |
kstat.zfs.misc.abdstats.scatter_data_size |
scatter_data_size |
kstat.zfs.misc.abdstats.struct_size |
struct_size |
kstat.zfs.misc.arcstats |
|
kstat.zfs.misc.arcstats.abd_chunk_waste_size |
abd_chunk_waste_size |
kstat.zfs.misc.arcstats.access_skip |
access_skip |
kstat.zfs.misc.arcstats.anon_data |
anon_data |
kstat.zfs.misc.arcstats.anon_evictable_data |
anon_evictable_data |
kstat.zfs.misc.arcstats.anon_evictable_metadata |
anon_evictable_metadata |
kstat.zfs.misc.arcstats.anon_metadata |
anon_metadata |
kstat.zfs.misc.arcstats.anon_size |
anon_size |
kstat.zfs.misc.arcstats.arc_dnode_limit |
arc_dnode_limit |
kstat.zfs.misc.arcstats.arc_loaned_bytes |
arc_loaned_bytes |
kstat.zfs.misc.arcstats.arc_meta_used |
arc_meta_used |
kstat.zfs.misc.arcstats.arc_need_free |
arc_need_free |
kstat.zfs.misc.arcstats.arc_no_grow |
arc_no_grow |
kstat.zfs.misc.arcstats.arc_prune |
arc_prune |
kstat.zfs.misc.arcstats.arc_raw_size |
arc_raw_size |
kstat.zfs.misc.arcstats.arc_sys_free |
arc_sys_free |
kstat.zfs.misc.arcstats.arc_tempreserve |
arc_tempreserve |
kstat.zfs.misc.arcstats.async_upgrade_sync |
async_upgrade_sync |
kstat.zfs.misc.arcstats.bonus_size |
bonus_size |
kstat.zfs.misc.arcstats.c |
c |
kstat.zfs.misc.arcstats.c_max |
c_max |
kstat.zfs.misc.arcstats.c_min |
c_min |
kstat.zfs.misc.arcstats.cached_only_in_progress |
cached_only_in_progress |
kstat.zfs.misc.arcstats.compressed_size |
compressed_size |
kstat.zfs.misc.arcstats.data_size |
data_size |
kstat.zfs.misc.arcstats.dbuf_size |
dbuf_size |
kstat.zfs.misc.arcstats.deleted |
deleted |
kstat.zfs.misc.arcstats.demand_data_hits |
demand_data_hits |
kstat.zfs.misc.arcstats.demand_data_iohits |
demand_data_iohits |
kstat.zfs.misc.arcstats.demand_data_misses |
demand_data_misses |
kstat.zfs.misc.arcstats.demand_hit_predictive_prefetch |
demand_hit_predictive_prefetch |
kstat.zfs.misc.arcstats.demand_hit_prescient_prefetch |
demand_hit_prescient_prefetch |
kstat.zfs.misc.arcstats.demand_iohit_predictive_prefetch |
demand_iohit_predictive_prefetch |
kstat.zfs.misc.arcstats.demand_iohit_prescient_prefetch |
demand_iohit_prescient_prefetch |
kstat.zfs.misc.arcstats.demand_metadata_hits |
demand_metadata_hits |
kstat.zfs.misc.arcstats.demand_metadata_iohits |
demand_metadata_iohits |
kstat.zfs.misc.arcstats.demand_metadata_misses |
demand_metadata_misses |
kstat.zfs.misc.arcstats.dnode_size |
dnode_size |
kstat.zfs.misc.arcstats.evict_l2_cached |
evict_l2_cached |
kstat.zfs.misc.arcstats.evict_l2_eligible |
evict_l2_eligible |
kstat.zfs.misc.arcstats.evict_l2_eligible_mfu |
evict_l2_eligible_mfu |
kstat.zfs.misc.arcstats.evict_l2_eligible_mru |
evict_l2_eligible_mru |
kstat.zfs.misc.arcstats.evict_l2_ineligible |
evict_l2_ineligible |
kstat.zfs.misc.arcstats.evict_l2_skip |
evict_l2_skip |
kstat.zfs.misc.arcstats.evict_not_enough |
evict_not_enough |
kstat.zfs.misc.arcstats.evict_skip |
evict_skip |
kstat.zfs.misc.arcstats.hash_chain_max |
hash_chain_max |
kstat.zfs.misc.arcstats.hash_chains |
hash_chains |
kstat.zfs.misc.arcstats.hash_collisions |
hash_collisions |
kstat.zfs.misc.arcstats.hash_elements |
hash_elements |
kstat.zfs.misc.arcstats.hash_elements_max |
hash_elements_max |
kstat.zfs.misc.arcstats.hdr_size |
hdr_size |
kstat.zfs.misc.arcstats.hits |
hits |
kstat.zfs.misc.arcstats.iohits |
iohits |
kstat.zfs.misc.arcstats.l2_abort_lowmem |
l2_abort_lowmem |
kstat.zfs.misc.arcstats.l2_asize |
l2_asize |
kstat.zfs.misc.arcstats.l2_bufc_data_asize |
l2_bufc_data_asize |
kstat.zfs.misc.arcstats.l2_bufc_metadata_asize |
l2_bufc_metadata_asize |
kstat.zfs.misc.arcstats.l2_cksum_bad |
l2_cksum_bad |
kstat.zfs.misc.arcstats.l2_data_to_meta_ratio |
l2_data_to_meta_ratio |
kstat.zfs.misc.arcstats.l2_evict_l1cached |
l2_evict_l1cached |
kstat.zfs.misc.arcstats.l2_evict_lock_retry |
l2_evict_lock_retry |
kstat.zfs.misc.arcstats.l2_evict_reading |
l2_evict_reading |
kstat.zfs.misc.arcstats.l2_feeds |
l2_feeds |
kstat.zfs.misc.arcstats.l2_free_on_write |
l2_free_on_write |
kstat.zfs.misc.arcstats.l2_hdr_size |
l2_hdr_size |
kstat.zfs.misc.arcstats.l2_hits |
l2_hits |
kstat.zfs.misc.arcstats.l2_io_error |
l2_io_error |
kstat.zfs.misc.arcstats.l2_log_blk_asize |
l2_log_blk_asize |
kstat.zfs.misc.arcstats.l2_log_blk_avg_asize |
l2_log_blk_avg_asize |
kstat.zfs.misc.arcstats.l2_log_blk_count |
l2_log_blk_count |
kstat.zfs.misc.arcstats.l2_log_blk_writes |
l2_log_blk_writes |
kstat.zfs.misc.arcstats.l2_mfu_asize |
l2_mfu_asize |
kstat.zfs.misc.arcstats.l2_misses |
l2_misses |
kstat.zfs.misc.arcstats.l2_mru_asize |
l2_mru_asize |
kstat.zfs.misc.arcstats.l2_prefetch_asize |
l2_prefetch_asize |
kstat.zfs.misc.arcstats.l2_read_bytes |
l2_read_bytes |
kstat.zfs.misc.arcstats.l2_rebuild_asize |
l2_rebuild_asize |
kstat.zfs.misc.arcstats.l2_rebuild_bufs |
l2_rebuild_bufs |
kstat.zfs.misc.arcstats.l2_rebuild_bufs_precached |
l2_rebuild_bufs_precached |
kstat.zfs.misc.arcstats.l2_rebuild_cksum_lb_errors |
l2_rebuild_cksum_lb_errors |
kstat.zfs.misc.arcstats.l2_rebuild_dh_errors |
l2_rebuild_dh_errors |
kstat.zfs.misc.arcstats.l2_rebuild_io_errors |
l2_rebuild_io_errors |
kstat.zfs.misc.arcstats.l2_rebuild_log_blks |
l2_rebuild_log_blks |
kstat.zfs.misc.arcstats.l2_rebuild_lowmem |
l2_rebuild_lowmem |
kstat.zfs.misc.arcstats.l2_rebuild_size |
l2_rebuild_size |
kstat.zfs.misc.arcstats.l2_rebuild_success |
l2_rebuild_success |
kstat.zfs.misc.arcstats.l2_rebuild_unsupported |
l2_rebuild_unsupported |
kstat.zfs.misc.arcstats.l2_rw_clash |
l2_rw_clash |
kstat.zfs.misc.arcstats.l2_size |
l2_size |
kstat.zfs.misc.arcstats.l2_write_bytes |
l2_write_bytes |
kstat.zfs.misc.arcstats.l2_writes_done |
l2_writes_done |
kstat.zfs.misc.arcstats.l2_writes_error |
l2_writes_error |
kstat.zfs.misc.arcstats.l2_writes_lock_retry |
l2_writes_lock_retry |
kstat.zfs.misc.arcstats.l2_writes_sent |
l2_writes_sent |
kstat.zfs.misc.arcstats.memory_all_bytes |
memory_all_bytes |
kstat.zfs.misc.arcstats.memory_available_bytes |
memory_available_bytes |
kstat.zfs.misc.arcstats.memory_direct_count |
memory_direct_count |
kstat.zfs.misc.arcstats.memory_free_bytes |
memory_free_bytes |
kstat.zfs.misc.arcstats.memory_indirect_count |
memory_indirect_count |
kstat.zfs.misc.arcstats.memory_throttle_count |
memory_throttle_count |
kstat.zfs.misc.arcstats.meta |
meta |
kstat.zfs.misc.arcstats.metadata_size |
metadata_size |
kstat.zfs.misc.arcstats.mfu_data |
mfu_data |
kstat.zfs.misc.arcstats.mfu_evictable_data |
mfu_evictable_data |
kstat.zfs.misc.arcstats.mfu_evictable_metadata |
mfu_evictable_metadata |
kstat.zfs.misc.arcstats.mfu_ghost_data |
mfu_ghost_data |
kstat.zfs.misc.arcstats.mfu_ghost_evictable_data |
mfu_ghost_evictable_data |
kstat.zfs.misc.arcstats.mfu_ghost_evictable_metadata |
mfu_ghost_evictable_metadata |
kstat.zfs.misc.arcstats.mfu_ghost_hits |
mfu_ghost_hits |
kstat.zfs.misc.arcstats.mfu_ghost_metadata |
mfu_ghost_metadata |
kstat.zfs.misc.arcstats.mfu_ghost_size |
mfu_ghost_size |
kstat.zfs.misc.arcstats.mfu_hits |
mfu_hits |
kstat.zfs.misc.arcstats.mfu_metadata |
mfu_metadata |
kstat.zfs.misc.arcstats.mfu_size |
mfu_size |
kstat.zfs.misc.arcstats.misses |
misses |
kstat.zfs.misc.arcstats.mru_data |
mru_data |
kstat.zfs.misc.arcstats.mru_evictable_data |
mru_evictable_data |
kstat.zfs.misc.arcstats.mru_evictable_metadata |
mru_evictable_metadata |
kstat.zfs.misc.arcstats.mru_ghost_data |
mru_ghost_data |
kstat.zfs.misc.arcstats.mru_ghost_evictable_data |
mru_ghost_evictable_data |
kstat.zfs.misc.arcstats.mru_ghost_evictable_metadata |
mru_ghost_evictable_metadata |
kstat.zfs.misc.arcstats.mru_ghost_hits |
mru_ghost_hits |
kstat.zfs.misc.arcstats.mru_ghost_metadata |
mru_ghost_metadata |
kstat.zfs.misc.arcstats.mru_ghost_size |
mru_ghost_size |
kstat.zfs.misc.arcstats.mru_hits |
mru_hits |
kstat.zfs.misc.arcstats.mru_metadata |
mru_metadata |
kstat.zfs.misc.arcstats.mru_size |
mru_size |
kstat.zfs.misc.arcstats.mutex_miss |
mutex_miss |
kstat.zfs.misc.arcstats.other_size |
other_size |
kstat.zfs.misc.arcstats.overhead_size |
overhead_size |
kstat.zfs.misc.arcstats.pd |
pd |
kstat.zfs.misc.arcstats.pm |
pm |
kstat.zfs.misc.arcstats.predictive_prefetch |
predictive_prefetch |
kstat.zfs.misc.arcstats.prefetch_data_hits |
prefetch_data_hits |
kstat.zfs.misc.arcstats.prefetch_data_iohits |
prefetch_data_iohits |
kstat.zfs.misc.arcstats.prefetch_data_misses |
prefetch_data_misses |
kstat.zfs.misc.arcstats.prefetch_metadata_hits |
prefetch_metadata_hits |
kstat.zfs.misc.arcstats.prefetch_metadata_iohits |
prefetch_metadata_iohits |
kstat.zfs.misc.arcstats.prefetch_metadata_misses |
prefetch_metadata_misses |
kstat.zfs.misc.arcstats.prescient_prefetch |
prescient_prefetch |
kstat.zfs.misc.arcstats.size |
size |
kstat.zfs.misc.arcstats.uncached_data |
uncached_data |
kstat.zfs.misc.arcstats.uncached_evictable_data |
uncached_evictable_data |
kstat.zfs.misc.arcstats.uncached_evictable_metadata |
uncached_evictable_metadata |
kstat.zfs.misc.arcstats.uncached_hits |
uncached_hits |
kstat.zfs.misc.arcstats.uncached_metadata |
uncached_metadata |
kstat.zfs.misc.arcstats.uncached_size |
uncached_size |
kstat.zfs.misc.arcstats.uncompressed_size |
uncompressed_size |
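For orientation, the kstat.zfs.misc.arcstats leaves listed above can be read programmatically with sysctlbyname(3). A minimal sketch, assuming the size and c_max nodes are 64-bit unsigned integers, as they are on a typical FreeBSD/OpenZFS system:

/*
 * Sketch only: read two ARC counters from the list above and report
 * how full the ARC is relative to its maximum target.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t
read_u64(const char *name)
{
    uint64_t v = 0;
    size_t len = sizeof(v);

    if (sysctlbyname(name, &v, &len, NULL, 0) != 0)
        err(1, "sysctlbyname(%s)", name);
    return (v);
}

int
main(void)
{
    uint64_t size = read_u64("kstat.zfs.misc.arcstats.size");
    uint64_t c_max = read_u64("kstat.zfs.misc.arcstats.c_max");

    /* Current ARC size versus its maximum target, in MiB. */
    printf("ARC: %ju MiB used of %ju MiB max (%.1f%%)\n",
        (uintmax_t)(size >> 20), (uintmax_t)(c_max >> 20),
        c_max ? 100.0 * size / c_max : 0.0);
    return (0);
}

No libraries beyond libc are needed; the same helper works for any 64-bit leaf in this table.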
kstat.zfs.misc.brtstats |
|
kstat.zfs.misc.brtstats.addref_entry_in_memory |
addref_entry_in_memory |
kstat.zfs.misc.brtstats.addref_entry_not_on_disk |
addref_entry_not_on_disk |
kstat.zfs.misc.brtstats.addref_entry_on_disk |
addref_entry_on_disk |
kstat.zfs.misc.brtstats.addref_entry_read_lost_race |
addref_entry_read_lost_race |
kstat.zfs.misc.brtstats.decref_entry_in_memory |
decref_entry_in_memory |
kstat.zfs.misc.brtstats.decref_entry_loaded_from_disk |
decref_entry_loaded_from_disk |
kstat.zfs.misc.brtstats.decref_entry_not_in_memory |
decref_entry_not_in_memory |
kstat.zfs.misc.brtstats.decref_entry_not_on_disk |
decref_entry_not_on_disk |
kstat.zfs.misc.brtstats.decref_entry_read_lost_race |
decref_entry_read_lost_race |
kstat.zfs.misc.brtstats.decref_entry_still_referenced |
decref_entry_still_referenced |
kstat.zfs.misc.brtstats.decref_free_data_later |
decref_free_data_later |
kstat.zfs.misc.brtstats.decref_free_data_now |
decref_free_data_now |
kstat.zfs.misc.brtstats.decref_no_entry |
decref_no_entry |
kstat.zfs.misc.chksum_bench |
chksum_bench |
kstat.zfs.misc.dbgmsg |
dbgmsg |
kstat.zfs.misc.dbufs |
dbufs |
kstat.zfs.misc.dbufstats |
|
kstat.zfs.misc.dbufstats.cache_count |
cache_count |
kstat.zfs.misc.dbufstats.cache_hiwater_bytes |
cache_hiwater_bytes |
kstat.zfs.misc.dbufstats.cache_level_[num] |
cache_level_0 |
kstat.zfs.misc.dbufstats.cache_level_[num]_bytes |
cache_level_0_bytes |
kstat.zfs.misc.dbufstats.cache_lowater_bytes |
cache_lowater_bytes |
kstat.zfs.misc.dbufstats.cache_size_bytes |
cache_size_bytes |
kstat.zfs.misc.dbufstats.cache_size_bytes_max |
cache_size_bytes_max |
kstat.zfs.misc.dbufstats.cache_target_bytes |
cache_target_bytes |
kstat.zfs.misc.dbufstats.cache_total_evicts |
cache_total_evicts |
kstat.zfs.misc.dbufstats.hash_chain_max |
hash_chain_max |
kstat.zfs.misc.dbufstats.hash_chains |
hash_chains |
kstat.zfs.misc.dbufstats.hash_collisions |
hash_collisions |
kstat.zfs.misc.dbufstats.hash_elements |
hash_elements |
kstat.zfs.misc.dbufstats.hash_elements_max |
hash_elements_max |
kstat.zfs.misc.dbufstats.hash_hits |
hash_hits |
kstat.zfs.misc.dbufstats.hash_insert_race |
hash_insert_race |
kstat.zfs.misc.dbufstats.hash_misses |
hash_misses |
kstat.zfs.misc.dbufstats.hash_mutex_count |
hash_mutex_count |
kstat.zfs.misc.dbufstats.hash_table_count |
hash_table_count |
kstat.zfs.misc.dbufstats.metadata_cache_count |
metadata_cache_count |
kstat.zfs.misc.dbufstats.metadata_cache_overflow |
metadata_cache_overflow |
kstat.zfs.misc.dbufstats.metadata_cache_size_bytes |
metadata_cache_size_bytes |
kstat.zfs.misc.dbufstats.metadata_cache_size_bytes_max |
metadata_cache_size_bytes_max |
kstat.zfs.misc.dmu_tx |
|
kstat.zfs.misc.dmu_tx.dmu_tx_assigned |
dmu_tx_assigned |
kstat.zfs.misc.dmu_tx.dmu_tx_delay |
dmu_tx_delay |
kstat.zfs.misc.dmu_tx.dmu_tx_dirty_delay |
dmu_tx_dirty_delay |
kstat.zfs.misc.dmu_tx.dmu_tx_dirty_frees_delay |
dmu_tx_dirty_frees_delay |
kstat.zfs.misc.dmu_tx.dmu_tx_dirty_over_max |
dmu_tx_dirty_over_max |
kstat.zfs.misc.dmu_tx.dmu_tx_dirty_throttle |
dmu_tx_dirty_throttle |
kstat.zfs.misc.dmu_tx.dmu_tx_error |
dmu_tx_error |
kstat.zfs.misc.dmu_tx.dmu_tx_group |
dmu_tx_group |
kstat.zfs.misc.dmu_tx.dmu_tx_memory_reclaim |
dmu_tx_memory_reclaim |
kstat.zfs.misc.dmu_tx.dmu_tx_memory_reserve |
dmu_tx_memory_reserve |
kstat.zfs.misc.dmu_tx.dmu_tx_quota |
dmu_tx_quota |
kstat.zfs.misc.dmu_tx.dmu_tx_suspended |
dmu_tx_suspended |
kstat.zfs.misc.dmu_tx.dmu_tx_wrlog_delay |
dmu_tx_wrlog_delay |
kstat.zfs.misc.dnodestats |
|
kstat.zfs.misc.dnodestats.dnode_alloc_next_block |
dnode_alloc_next_block |
kstat.zfs.misc.dnodestats.dnode_alloc_next_chunk |
dnode_alloc_next_chunk |
kstat.zfs.misc.dnodestats.dnode_alloc_race |
dnode_alloc_race |
kstat.zfs.misc.dnodestats.dnode_allocate |
dnode_allocate |
kstat.zfs.misc.dnodestats.dnode_buf_evict |
dnode_buf_evict |
kstat.zfs.misc.dnodestats.dnode_free_interior_lock_retry |
dnode_free_interior_lock_retry |
kstat.zfs.misc.dnodestats.dnode_hold_alloc_hits |
dnode_hold_alloc_hits |
kstat.zfs.misc.dnodestats.dnode_hold_alloc_interior |
dnode_hold_alloc_interior |
kstat.zfs.misc.dnodestats.dnode_hold_alloc_lock_misses |
dnode_hold_alloc_lock_misses |
kstat.zfs.misc.dnodestats.dnode_hold_alloc_lock_retry |
dnode_hold_alloc_lock_retry |
kstat.zfs.misc.dnodestats.dnode_hold_alloc_misses |
dnode_hold_alloc_misses |
kstat.zfs.misc.dnodestats.dnode_hold_alloc_type_none |
dnode_hold_alloc_type_none |
kstat.zfs.misc.dnodestats.dnode_hold_dbuf_hold |
dnode_hold_dbuf_hold |
kstat.zfs.misc.dnodestats.dnode_hold_dbuf_read |
dnode_hold_dbuf_read |
kstat.zfs.misc.dnodestats.dnode_hold_free_hits |
dnode_hold_free_hits |
kstat.zfs.misc.dnodestats.dnode_hold_free_lock_misses |
dnode_hold_free_lock_misses |
kstat.zfs.misc.dnodestats.dnode_hold_free_lock_retry |
dnode_hold_free_lock_retry |
kstat.zfs.misc.dnodestats.dnode_hold_free_misses |
dnode_hold_free_misses |
kstat.zfs.misc.dnodestats.dnode_hold_free_overflow |
dnode_hold_free_overflow |
kstat.zfs.misc.dnodestats.dnode_hold_free_refcount |
dnode_hold_free_refcount |
kstat.zfs.misc.dnodestats.dnode_move_active |
dnode_move_active |
kstat.zfs.misc.dnodestats.dnode_move_handle |
dnode_move_handle |
kstat.zfs.misc.dnodestats.dnode_move_invalid |
dnode_move_invalid |
kstat.zfs.misc.dnodestats.dnode_move_recheck1 |
dnode_move_recheck1 |
kstat.zfs.misc.dnodestats.dnode_move_recheck2 |
dnode_move_recheck2 |
kstat.zfs.misc.dnodestats.dnode_move_rwlock |
dnode_move_rwlock |
kstat.zfs.misc.dnodestats.dnode_move_special |
dnode_move_special |
kstat.zfs.misc.dnodestats.dnode_reallocate |
dnode_reallocate |
kstat.zfs.misc.fletcher_4_bench |
fletcher_4_bench |
kstat.zfs.misc.fm |
|
kstat.zfs.misc.fm.erpt-dropped |
erpt-dropped |
kstat.zfs.misc.fm.erpt-duplicates |
erpt-duplicates |
kstat.zfs.misc.fm.erpt-set-failed |
erpt-set-failed |
kstat.zfs.misc.fm.fmri-set-failed |
fmri-set-failed |
kstat.zfs.misc.fm.payload-set-failed |
payload-set-failed |
kstat.zfs.misc.import_progress |
import_progress |
kstat.zfs.misc.metaslab_stats |
|
kstat.zfs.misc.metaslab_stats.reload_tree |
reload_tree |
kstat.zfs.misc.metaslab_stats.too_many_tries |
too_many_tries |
kstat.zfs.misc.metaslab_stats.trace_over_limit |
trace_over_limit |
kstat.zfs.misc.metaslab_stats.try_hard |
try_hard |
kstat.zfs.misc.vdev_mirror_stats |
|
kstat.zfs.misc.vdev_mirror_stats.non_rotating_linear |
non_rotating_linear |
kstat.zfs.misc.vdev_mirror_stats.non_rotating_seek |
non_rotating_seek |
kstat.zfs.misc.vdev_mirror_stats.preferred_found |
preferred_found |
kstat.zfs.misc.vdev_mirror_stats.preferred_not_found |
preferred_not_found |
kstat.zfs.misc.vdev_mirror_stats.rotating_linear |
rotating_linear |
kstat.zfs.misc.vdev_mirror_stats.rotating_offset |
rotating_offset |
kstat.zfs.misc.vdev_mirror_stats.rotating_seek |
rotating_seek |
kstat.zfs.misc.vdev_raidz_bench |
vdev_raidz_bench |
kstat.zfs.misc.zfetchstats |
|
kstat.zfs.misc.zfetchstats.future |
future |
kstat.zfs.misc.zfetchstats.hits |
hits |
kstat.zfs.misc.zfetchstats.io_active |
io_active |
kstat.zfs.misc.zfetchstats.io_issued |
io_issued |
kstat.zfs.misc.zfetchstats.max_streams |
max_streams |
kstat.zfs.misc.zfetchstats.misses |
misses |
kstat.zfs.misc.zfetchstats.past |
past |
kstat.zfs.misc.zfetchstats.stride |
stride |
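As a usage note for the zfetchstats counters above, a prefetch hit ratio can be derived from the hits and misses leaves. A minimal sketch, assuming both are 64-bit unsigned counters:

/*
 * Sketch only: compute a prefetch hit ratio from the
 * kstat.zfs.misc.zfetchstats counters listed above.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    uint64_t hits = 0, misses = 0;
    size_t len = sizeof(uint64_t);

    if (sysctlbyname("kstat.zfs.misc.zfetchstats.hits", &hits, &len,
        NULL, 0) != 0)
        err(1, "zfetchstats.hits");
    len = sizeof(uint64_t);
    if (sysctlbyname("kstat.zfs.misc.zfetchstats.misses", &misses, &len,
        NULL, 0) != 0)
        err(1, "zfetchstats.misses");

    /* Fraction of prefetch stream lookups that were satisfied. */
    printf("zfetch: %ju hits, %ju misses (%.1f%% hit ratio)\n",
        (uintmax_t)hits, (uintmax_t)misses,
        (hits + misses) ? 100.0 * hits / (hits + misses) : 0.0);
    return (0);
}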
kstat.zfs.misc.zil |
|
kstat.zfs.misc.zil.zil_commit_count |
zil_commit_count |
kstat.zfs.misc.zil.zil_commit_error_count |
zil_commit_error_count |
kstat.zfs.misc.zil.zil_commit_stall_count |
zil_commit_stall_count |
kstat.zfs.misc.zil.zil_commit_suspend_count |
zil_commit_suspend_count |
kstat.zfs.misc.zil.zil_commit_writer_count |
zil_commit_writer_count |
kstat.zfs.misc.zil.zil_itx_copied_bytes |
zil_itx_copied_bytes |
kstat.zfs.misc.zil.zil_itx_copied_count |
zil_itx_copied_count |
kstat.zfs.misc.zil.zil_itx_count |
zil_itx_count |
kstat.zfs.misc.zil.zil_itx_indirect_bytes |
zil_itx_indirect_bytes |
kstat.zfs.misc.zil.zil_itx_indirect_count |
zil_itx_indirect_count |
kstat.zfs.misc.zil.zil_itx_metaslab_normal_alloc |
zil_itx_metaslab_normal_alloc |
kstat.zfs.misc.zil.zil_itx_metaslab_normal_bytes |
zil_itx_metaslab_normal_bytes |
kstat.zfs.misc.zil.zil_itx_metaslab_normal_count |
zil_itx_metaslab_normal_count |
kstat.zfs.misc.zil.zil_itx_metaslab_normal_write |
zil_itx_metaslab_normal_write |
kstat.zfs.misc.zil.zil_itx_metaslab_slog_alloc |
zil_itx_metaslab_slog_alloc |
kstat.zfs.misc.zil.zil_itx_metaslab_slog_bytes |
zil_itx_metaslab_slog_bytes |
kstat.zfs.misc.zil.zil_itx_metaslab_slog_count |
zil_itx_metaslab_slog_count |
kstat.zfs.misc.zil.zil_itx_metaslab_slog_write |
zil_itx_metaslab_slog_write |
kstat.zfs.misc.zil.zil_itx_needcopy_bytes |
zil_itx_needcopy_bytes |
kstat.zfs.misc.zil.zil_itx_needcopy_count |
zil_itx_needcopy_count |
kstat.zfs.misc.zio_stats |
|
kstat.zfs.misc.zio_stats.alloc_class_fallbacks |
alloc_class_fallbacks |
kstat.zfs.misc.zio_stats.gang_multilevel |
gang_multilevel |
kstat.zfs.misc.zio_stats.gang_writes |
gang_writes |
kstat.zfs.misc.zio_stats.total_allocations |
total_allocations |
kstat.zfs.misc.zstd |
|
kstat.zfs.misc.zstd.alloc_fail |
alloc_fail |
kstat.zfs.misc.zstd.alloc_fallback |
alloc_fallback |
kstat.zfs.misc.zstd.buffers |
buffers |
kstat.zfs.misc.zstd.compress_alloc_fail |
compress_alloc_fail |
kstat.zfs.misc.zstd.compress_failed |
compress_failed |
kstat.zfs.misc.zstd.compress_level_invalid |
compress_level_invalid |
kstat.zfs.misc.zstd.decompress_alloc_fail |
decompress_alloc_fail |
kstat.zfs.misc.zstd.decompress_failed |
decompress_failed |
kstat.zfs.misc.zstd.decompress_header_invalid |
decompress_header_invalid |
kstat.zfs.misc.zstd.decompress_level_invalid |
decompress_level_invalid |
kstat.zfs.misc.zstd.lz4pass_allowed |
lz4pass_allowed |
kstat.zfs.misc.zstd.lz4pass_rejected |
lz4pass_rejected |
kstat.zfs.misc.zstd.passignored |
passignored |
kstat.zfs.misc.zstd.passignored_size |
passignored_size |
kstat.zfs.misc.zstd.size |
size |
kstat.zfs.misc.zstd.zstdpass_allowed |
zstdpass_allowed |
kstat.zfs.misc.zstd.zstdpass_rejected |
zstdpass_rejected |
kstat.zfs.nvtank |
|
kstat.zfs.nvtank.dataset |
|
kstat.zfs.nvtank.dataset.objset-[num] |
|
kstat.zfs.nvtank.dataset.objset-[num].dataset_name |
dataset_name |
kstat.zfs.nvtank.dataset.objset-[num].nread |
nread |
kstat.zfs.nvtank.dataset.objset-[num].nunlinked |
nunlinked |
kstat.zfs.nvtank.dataset.objset-[num].nunlinks |
nunlinks |
kstat.zfs.nvtank.dataset.objset-[num].nwritten |
nwritten |
kstat.zfs.nvtank.dataset.objset-[num].reads |
reads |
kstat.zfs.nvtank.dataset.objset-[num].writes |
writes |
kstat.zfs.nvtank.dataset.objset-[num].zil_commit_count |
zil_commit_count |
kstat.zfs.nvtank.dataset.objset-[num].zil_commit_writer_count |
zil_commit_writer_count |
kstat.zfs.nvtank.dataset.objset-[num].zil_itx_copied_bytes |
zil_itx_copied_bytes |
kstat.zfs.nvtank.dataset.objset-[num].zil_itx_copied_count |
zil_itx_copied_count |
kstat.zfs.nvtank.dataset.objset-[num].zil_itx_count |
zil_itx_count |
kstat.zfs.nvtank.dataset.objset-[num].zil_itx_indirect_bytes |
zil_itx_indirect_bytes |
kstat.zfs.nvtank.dataset.objset-[num].zil_itx_indirect_count |
zil_itx_indirect_count |
kstat.zfs.nvtank.dataset.objset-[num].zil_itx_metaslab_normal_alloc |
zil_itx_metaslab_normal_alloc |
kstat.zfs.nvtank.dataset.objset-[num].zil_itx_metaslab_normal_bytes |
zil_itx_metaslab_normal_bytes |
kstat.zfs.nvtank.dataset.objset-[num].zil_itx_metaslab_normal_count |
zil_itx_metaslab_normal_count |
kstat.zfs.nvtank.dataset.objset-[num].zil_itx_metaslab_normal_write |
zil_itx_metaslab_normal_write |
kstat.zfs.nvtank.dataset.objset-[num].zil_itx_metaslab_slog_alloc |
zil_itx_metaslab_slog_alloc |
kstat.zfs.nvtank.dataset.objset-[num].zil_itx_metaslab_slog_bytes |
zil_itx_metaslab_slog_bytes |
kstat.zfs.nvtank.dataset.objset-[num].zil_itx_metaslab_slog_count |
zil_itx_metaslab_slog_count |
kstat.zfs.nvtank.dataset.objset-[num].zil_itx_metaslab_slog_write |
zil_itx_metaslab_slog_write |
kstat.zfs.nvtank.dataset.objset-[num].zil_itx_needcopy_bytes |
zil_itx_needcopy_bytes |
kstat.zfs.nvtank.dataset.objset-[num].zil_itx_needcopy_count |
zil_itx_needcopy_count |
kstat.zfs.nvtank.misc |
|
kstat.zfs.nvtank.misc.dmu_tx_assign |
|
kstat.zfs.nvtank.misc.dmu_tx_assign.[num] ns |
1 ns |
kstat.zfs.nvtank.misc.guid |
guid |
kstat.zfs.nvtank.misc.iostats |
|
kstat.zfs.nvtank.misc.iostats.autotrim_bytes_failed |
autotrim_bytes_failed |
kstat.zfs.nvtank.misc.iostats.autotrim_bytes_skipped |
autotrim_bytes_skipped |
kstat.zfs.nvtank.misc.iostats.autotrim_bytes_written |
autotrim_bytes_written |
kstat.zfs.nvtank.misc.iostats.autotrim_extents_failed |
autotrim_extents_failed |
kstat.zfs.nvtank.misc.iostats.autotrim_extents_skipped |
autotrim_extents_skipped |
kstat.zfs.nvtank.misc.iostats.autotrim_extents_written |
autotrim_extents_written |
kstat.zfs.nvtank.misc.iostats.simple_trim_bytes_failed |
simple_trim_bytes_failed |
kstat.zfs.nvtank.misc.iostats.simple_trim_bytes_skipped |
simple_trim_bytes_skipped |
kstat.zfs.nvtank.misc.iostats.simple_trim_bytes_written |
simple_trim_bytes_written |
kstat.zfs.nvtank.misc.iostats.simple_trim_extents_failed |
simple_trim_extents_failed |
kstat.zfs.nvtank.misc.iostats.simple_trim_extents_skipped |
simple_trim_extents_skipped |
kstat.zfs.nvtank.misc.iostats.simple_trim_extents_written |
simple_trim_extents_written |
kstat.zfs.nvtank.misc.iostats.trim_bytes_failed |
trim_bytes_failed |
kstat.zfs.nvtank.misc.iostats.trim_bytes_skipped |
trim_bytes_skipped |
kstat.zfs.nvtank.misc.iostats.trim_bytes_written |
trim_bytes_written |
kstat.zfs.nvtank.misc.iostats.trim_extents_failed |
trim_extents_failed |
kstat.zfs.nvtank.misc.iostats.trim_extents_skipped |
trim_extents_skipped |
kstat.zfs.nvtank.misc.iostats.trim_extents_written |
trim_extents_written |
kstat.zfs.nvtank.misc.state |
state |
kstat.zfs.nvtank.multihost |
multihost |
kstat.zfs.nvtank.reads |
reads |
kstat.zfs.nvtank.txgs |
txgs |
kstat.zfs.system |
|
kstat.zfs.system.dataset |
|
kstat.zfs.system.dataset.objset-[num] |
|
kstat.zfs.system.dataset.objset-[num].dataset_name |
dataset_name |
kstat.zfs.system.dataset.objset-[num].nread |
nread |
kstat.zfs.system.dataset.objset-[num].nunlinked |
nunlinked |
kstat.zfs.system.dataset.objset-[num].nunlinks |
nunlinks |
kstat.zfs.system.dataset.objset-[num].nwritten |
nwritten |
kstat.zfs.system.dataset.objset-[num].reads |
reads |
kstat.zfs.system.dataset.objset-[num].writes |
writes |
kstat.zfs.system.dataset.objset-[num].zil_commit_count |
zil_commit_count |
kstat.zfs.system.dataset.objset-[num].zil_commit_writer_count |
zil_commit_writer_count |
kstat.zfs.system.dataset.objset-[num].zil_itx_copied_bytes |
zil_itx_copied_bytes |
kstat.zfs.system.dataset.objset-[num].zil_itx_copied_count |
zil_itx_copied_count |
kstat.zfs.system.dataset.objset-[num].zil_itx_count |
zil_itx_count |
kstat.zfs.system.dataset.objset-[num].zil_itx_indirect_bytes |
zil_itx_indirect_bytes |
kstat.zfs.system.dataset.objset-[num].zil_itx_indirect_count |
zil_itx_indirect_count |
kstat.zfs.system.dataset.objset-[num].zil_itx_metaslab_normal_alloc |
zil_itx_metaslab_normal_alloc |
kstat.zfs.system.dataset.objset-[num].zil_itx_metaslab_normal_bytes |
zil_itx_metaslab_normal_bytes |
kstat.zfs.system.dataset.objset-[num].zil_itx_metaslab_normal_count |
zil_itx_metaslab_normal_count |
kstat.zfs.system.dataset.objset-[num].zil_itx_metaslab_normal_write |
zil_itx_metaslab_normal_write |
kstat.zfs.system.dataset.objset-[num].zil_itx_metaslab_slog_alloc |
zil_itx_metaslab_slog_alloc |
kstat.zfs.system.dataset.objset-[num].zil_itx_metaslab_slog_bytes |
zil_itx_metaslab_slog_bytes |
kstat.zfs.system.dataset.objset-[num].zil_itx_metaslab_slog_count |
zil_itx_metaslab_slog_count |
kstat.zfs.system.dataset.objset-[num].zil_itx_metaslab_slog_write |
zil_itx_metaslab_slog_write |
kstat.zfs.system.dataset.objset-[num].zil_itx_needcopy_bytes |
zil_itx_needcopy_bytes |
kstat.zfs.system.dataset.objset-[num].zil_itx_needcopy_count |
zil_itx_needcopy_count |
kstat.zfs.system.misc |
|
kstat.zfs.system.misc.dmu_tx_assign |
|
kstat.zfs.system.misc.dmu_tx_assign.[num] ns |
1 ns |
kstat.zfs.system.misc.guid |
guid |
kstat.zfs.system.misc.iostats |
|
kstat.zfs.system.misc.iostats.autotrim_bytes_failed |
autotrim_bytes_failed |
kstat.zfs.system.misc.iostats.autotrim_bytes_skipped |
autotrim_bytes_skipped |
kstat.zfs.system.misc.iostats.autotrim_bytes_written |
autotrim_bytes_written |
kstat.zfs.system.misc.iostats.autotrim_extents_failed |
autotrim_extents_failed |
kstat.zfs.system.misc.iostats.autotrim_extents_skipped |
autotrim_extents_skipped |
kstat.zfs.system.misc.iostats.autotrim_extents_written |
autotrim_extents_written |
kstat.zfs.system.misc.iostats.simple_trim_bytes_failed |
simple_trim_bytes_failed |
kstat.zfs.system.misc.iostats.simple_trim_bytes_skipped |
simple_trim_bytes_skipped |
kstat.zfs.system.misc.iostats.simple_trim_bytes_written |
simple_trim_bytes_written |
kstat.zfs.system.misc.iostats.simple_trim_extents_failed |
simple_trim_extents_failed |
kstat.zfs.system.misc.iostats.simple_trim_extents_skipped |
simple_trim_extents_skipped |
kstat.zfs.system.misc.iostats.simple_trim_extents_written |
simple_trim_extents_written |
kstat.zfs.system.misc.iostats.trim_bytes_failed |
trim_bytes_failed |
kstat.zfs.system.misc.iostats.trim_bytes_skipped |
trim_bytes_skipped |
kstat.zfs.system.misc.iostats.trim_bytes_written |
trim_bytes_written |
kstat.zfs.system.misc.iostats.trim_extents_failed |
trim_extents_failed |
kstat.zfs.system.misc.iostats.trim_extents_skipped |
trim_extents_skipped |
kstat.zfs.system.misc.iostats.trim_extents_written |
trim_extents_written |
kstat.zfs.system.misc.state |
state |
kstat.zfs.system.multihost |
multihost |
kstat.zfs.system.reads |
reads |
kstat.zfs.system.txgs |
txgs |
kstat.zfs.tank |
|
kstat.zfs.tank.dataset |
|
kstat.zfs.tank.dataset.objset-[num] |
|
kstat.zfs.tank.dataset.objset-[num].dataset_name |
dataset_name |
kstat.zfs.tank.dataset.objset-[num].nread |
nread |
kstat.zfs.tank.dataset.objset-[num].nunlinked |
nunlinked |
kstat.zfs.tank.dataset.objset-[num].nunlinks |
nunlinks |
kstat.zfs.tank.dataset.objset-[num].nwritten |
nwritten |
kstat.zfs.tank.dataset.objset-[num].reads |
reads |
kstat.zfs.tank.dataset.objset-[num].writes |
writes |
kstat.zfs.tank.dataset.objset-[num].zil_commit_count |
zil_commit_count |
kstat.zfs.tank.dataset.objset-[num].zil_commit_writer_count |
zil_commit_writer_count |
kstat.zfs.tank.dataset.objset-[num].zil_itx_copied_bytes |
zil_itx_copied_bytes |
kstat.zfs.tank.dataset.objset-[num].zil_itx_copied_count |
zil_itx_copied_count |
kstat.zfs.tank.dataset.objset-[num].zil_itx_count |
zil_itx_count |
kstat.zfs.tank.dataset.objset-[num].zil_itx_indirect_bytes |
zil_itx_indirect_bytes |
kstat.zfs.tank.dataset.objset-[num].zil_itx_indirect_count |
zil_itx_indirect_count |
kstat.zfs.tank.dataset.objset-[num].zil_itx_metaslab_normal_alloc |
zil_itx_metaslab_normal_alloc |
kstat.zfs.tank.dataset.objset-[num].zil_itx_metaslab_normal_bytes |
zil_itx_metaslab_normal_bytes |
kstat.zfs.tank.dataset.objset-[num].zil_itx_metaslab_normal_count |
zil_itx_metaslab_normal_count |
kstat.zfs.tank.dataset.objset-[num].zil_itx_metaslab_normal_write |
zil_itx_metaslab_normal_write |
kstat.zfs.tank.dataset.objset-[num].zil_itx_metaslab_slog_alloc |
zil_itx_metaslab_slog_alloc |
kstat.zfs.tank.dataset.objset-[num].zil_itx_metaslab_slog_bytes |
zil_itx_metaslab_slog_bytes |
kstat.zfs.tank.dataset.objset-[num].zil_itx_metaslab_slog_count |
zil_itx_metaslab_slog_count |
kstat.zfs.tank.dataset.objset-[num].zil_itx_metaslab_slog_write |
zil_itx_metaslab_slog_write |
kstat.zfs.tank.dataset.objset-[num].zil_itx_needcopy_bytes |
zil_itx_needcopy_bytes |
kstat.zfs.tank.dataset.objset-[num].zil_itx_needcopy_count |
zil_itx_needcopy_count |
kstat.zfs.tank.misc |
|
kstat.zfs.tank.misc.dmu_tx_assign |
|
kstat.zfs.tank.misc.dmu_tx_assign.[num] ns |
1 ns |
kstat.zfs.tank.misc.guid |
guid |
kstat.zfs.tank.misc.iostats |
|
kstat.zfs.tank.misc.iostats.autotrim_bytes_failed |
autotrim_bytes_failed |
kstat.zfs.tank.misc.iostats.autotrim_bytes_skipped |
autotrim_bytes_skipped |
kstat.zfs.tank.misc.iostats.autotrim_bytes_written |
autotrim_bytes_written |
kstat.zfs.tank.misc.iostats.autotrim_extents_failed |
autotrim_extents_failed |
kstat.zfs.tank.misc.iostats.autotrim_extents_skipped |
autotrim_extents_skipped |
kstat.zfs.tank.misc.iostats.autotrim_extents_written |
autotrim_extents_written |
kstat.zfs.tank.misc.iostats.simple_trim_bytes_failed |
simple_trim_bytes_failed |
kstat.zfs.tank.misc.iostats.simple_trim_bytes_skipped |
simple_trim_bytes_skipped |
kstat.zfs.tank.misc.iostats.simple_trim_bytes_written |
simple_trim_bytes_written |
kstat.zfs.tank.misc.iostats.simple_trim_extents_failed |
simple_trim_extents_failed |
kstat.zfs.tank.misc.iostats.simple_trim_extents_skipped |
simple_trim_extents_skipped |
kstat.zfs.tank.misc.iostats.simple_trim_extents_written |
simple_trim_extents_written |
kstat.zfs.tank.misc.iostats.trim_bytes_failed |
trim_bytes_failed |
kstat.zfs.tank.misc.iostats.trim_bytes_skipped |
trim_bytes_skipped |
kstat.zfs.tank.misc.iostats.trim_bytes_written |
trim_bytes_written |
kstat.zfs.tank.misc.iostats.trim_extents_failed |
trim_extents_failed |
kstat.zfs.tank.misc.iostats.trim_extents_skipped |
trim_extents_skipped |
kstat.zfs.tank.misc.iostats.trim_extents_written |
trim_extents_written |
kstat.zfs.tank.misc.state |
state |
kstat.zfs.tank.multihost |
multihost |
kstat.zfs.tank.reads |
reads |
kstat.zfs.tank.txgs |
txgs |
kstat.zfs.zroot |
|
kstat.zfs.zroot.dataset |
|
kstat.zfs.zroot.dataset.objset-[num] |
|
kstat.zfs.zroot.dataset.objset-[num].dataset_name |
dataset_name |
kstat.zfs.zroot.dataset.objset-[num].nread |
nread |
kstat.zfs.zroot.dataset.objset-[num].nunlinked |
nunlinked |
kstat.zfs.zroot.dataset.objset-[num].nunlinks |
nunlinks |
kstat.zfs.zroot.dataset.objset-[num].nwritten |
nwritten |
kstat.zfs.zroot.dataset.objset-[num].reads |
reads |
kstat.zfs.zroot.dataset.objset-[num].writes |
writes |
kstat.zfs.zroot.dataset.objset-[num].zil_commit_count |
zil_commit_count |
kstat.zfs.zroot.dataset.objset-[num].zil_commit_error_count |
zil_commit_error_count |
kstat.zfs.zroot.dataset.objset-[num].zil_commit_stall_count |
zil_commit_stall_count |
kstat.zfs.zroot.dataset.objset-[num].zil_commit_suspend_count |
zil_commit_suspend_count |
kstat.zfs.zroot.dataset.objset-[num].zil_commit_writer_count |
zil_commit_writer_count |
kstat.zfs.zroot.dataset.objset-[num].zil_itx_copied_bytes |
zil_itx_copied_bytes |
kstat.zfs.zroot.dataset.objset-[num].zil_itx_copied_count |
zil_itx_copied_count |
kstat.zfs.zroot.dataset.objset-[num].zil_itx_count |
zil_itx_count |
kstat.zfs.zroot.dataset.objset-[num].zil_itx_indirect_bytes |
zil_itx_indirect_bytes |
kstat.zfs.zroot.dataset.objset-[num].zil_itx_indirect_count |
zil_itx_indirect_count |
kstat.zfs.zroot.dataset.objset-[num].zil_itx_metaslab_normal_alloc |
zil_itx_metaslab_normal_alloc |
kstat.zfs.zroot.dataset.objset-[num].zil_itx_metaslab_normal_bytes |
zil_itx_metaslab_normal_bytes |
kstat.zfs.zroot.dataset.objset-[num].zil_itx_metaslab_normal_count |
zil_itx_metaslab_normal_count |
kstat.zfs.zroot.dataset.objset-[num].zil_itx_metaslab_normal_write |
zil_itx_metaslab_normal_write |
kstat.zfs.zroot.dataset.objset-[num].zil_itx_metaslab_slog_alloc |
zil_itx_metaslab_slog_alloc |
kstat.zfs.zroot.dataset.objset-[num].zil_itx_metaslab_slog_bytes |
zil_itx_metaslab_slog_bytes |
kstat.zfs.zroot.dataset.objset-[num].zil_itx_metaslab_slog_count |
zil_itx_metaslab_slog_count |
kstat.zfs.zroot.dataset.objset-[num].zil_itx_metaslab_slog_write |
zil_itx_metaslab_slog_write |
kstat.zfs.zroot.dataset.objset-[num].zil_itx_needcopy_bytes |
zil_itx_needcopy_bytes |
kstat.zfs.zroot.dataset.objset-[num].zil_itx_needcopy_count |
zil_itx_needcopy_count |
kstat.zfs.zroot.misc |
|
kstat.zfs.zroot.misc.ddt_stats_blake3 |
|
kstat.zfs.zroot.misc.ddt_stats_blake3.log_active_entries |
log_active_entries |
kstat.zfs.zroot.misc.ddt_stats_blake3.log_flush_rate |
log_flush_rate |
kstat.zfs.zroot.misc.ddt_stats_blake3.log_flush_time_rate |
log_flush_time_rate |
kstat.zfs.zroot.misc.ddt_stats_blake3.log_flushing_entries |
log_flushing_entries |
kstat.zfs.zroot.misc.ddt_stats_blake3.log_ingest_rate |
log_ingest_rate |
kstat.zfs.zroot.misc.ddt_stats_blake3.lookup |
lookup |
kstat.zfs.zroot.misc.ddt_stats_blake3.lookup_existing |
lookup_existing |
kstat.zfs.zroot.misc.ddt_stats_blake3.lookup_live_hit |
lookup_live_hit |
kstat.zfs.zroot.misc.ddt_stats_blake3.lookup_live_miss |
lookup_live_miss |
kstat.zfs.zroot.misc.ddt_stats_blake3.lookup_live_wait |
lookup_live_wait |
kstat.zfs.zroot.misc.ddt_stats_blake3.lookup_log_active_hit |
lookup_log_active_hit |
kstat.zfs.zroot.misc.ddt_stats_blake3.lookup_log_flushing_hit |
lookup_log_flushing_hit |
kstat.zfs.zroot.misc.ddt_stats_blake3.lookup_log_hit |
lookup_log_hit |
kstat.zfs.zroot.misc.ddt_stats_blake3.lookup_log_miss |
lookup_log_miss |
kstat.zfs.zroot.misc.ddt_stats_blake3.lookup_new |
lookup_new |
kstat.zfs.zroot.misc.ddt_stats_blake3.lookup_stored_hit |
lookup_stored_hit |
kstat.zfs.zroot.misc.ddt_stats_blake3.lookup_stored_miss |
lookup_stored_miss |
kstat.zfs.zroot.misc.ddt_stats_edonr |
|
kstat.zfs.zroot.misc.ddt_stats_edonr.log_active_entries |
log_active_entries |
kstat.zfs.zroot.misc.ddt_stats_edonr.log_flush_rate |
log_flush_rate |
kstat.zfs.zroot.misc.ddt_stats_edonr.log_flush_time_rate |
log_flush_time_rate |
kstat.zfs.zroot.misc.ddt_stats_edonr.log_flushing_entries |
log_flushing_entries |
kstat.zfs.zroot.misc.ddt_stats_edonr.log_ingest_rate |
log_ingest_rate |
kstat.zfs.zroot.misc.ddt_stats_edonr.lookup |
lookup |
kstat.zfs.zroot.misc.ddt_stats_edonr.lookup_existing |
lookup_existing |
kstat.zfs.zroot.misc.ddt_stats_edonr.lookup_live_hit |
lookup_live_hit |
kstat.zfs.zroot.misc.ddt_stats_edonr.lookup_live_miss |
lookup_live_miss |
kstat.zfs.zroot.misc.ddt_stats_edonr.lookup_live_wait |
lookup_live_wait |
kstat.zfs.zroot.misc.ddt_stats_edonr.lookup_log_active_hit |
lookup_log_active_hit |
kstat.zfs.zroot.misc.ddt_stats_edonr.lookup_log_flushing_hit |
lookup_log_flushing_hit |
kstat.zfs.zroot.misc.ddt_stats_edonr.lookup_log_hit |
lookup_log_hit |
kstat.zfs.zroot.misc.ddt_stats_edonr.lookup_log_miss |
lookup_log_miss |
kstat.zfs.zroot.misc.ddt_stats_edonr.lookup_new |
lookup_new |
kstat.zfs.zroot.misc.ddt_stats_edonr.lookup_stored_hit |
lookup_stored_hit |
kstat.zfs.zroot.misc.ddt_stats_edonr.lookup_stored_miss |
lookup_stored_miss |
kstat.zfs.zroot.misc.ddt_stats_sha[num] |
|
kstat.zfs.zroot.misc.ddt_stats_sha[num].log_active_entries |
log_active_entries |
kstat.zfs.zroot.misc.ddt_stats_sha[num].log_flush_rate |
log_flush_rate |
kstat.zfs.zroot.misc.ddt_stats_sha[num].log_flush_time_rate |
log_flush_time_rate |
kstat.zfs.zroot.misc.ddt_stats_sha[num].log_flushing_entries |
log_flushing_entries |
kstat.zfs.zroot.misc.ddt_stats_sha[num].log_ingest_rate |
log_ingest_rate |
kstat.zfs.zroot.misc.ddt_stats_sha[num].lookup |
lookup |
kstat.zfs.zroot.misc.ddt_stats_sha[num].lookup_existing |
lookup_existing |
kstat.zfs.zroot.misc.ddt_stats_sha[num].lookup_live_hit |
lookup_live_hit |
kstat.zfs.zroot.misc.ddt_stats_sha[num].lookup_live_miss |
lookup_live_miss |
kstat.zfs.zroot.misc.ddt_stats_sha[num].lookup_live_wait |
lookup_live_wait |
kstat.zfs.zroot.misc.ddt_stats_sha[num].lookup_log_active_hit |
lookup_log_active_hit |
kstat.zfs.zroot.misc.ddt_stats_sha[num].lookup_log_flushing_hit |
lookup_log_flushing_hit |
kstat.zfs.zroot.misc.ddt_stats_sha[num].lookup_log_hit |
lookup_log_hit |
kstat.zfs.zroot.misc.ddt_stats_sha[num].lookup_log_miss |
lookup_log_miss |
kstat.zfs.zroot.misc.ddt_stats_sha[num].lookup_new |
lookup_new |
kstat.zfs.zroot.misc.ddt_stats_sha[num].lookup_stored_hit |
lookup_stored_hit |
kstat.zfs.zroot.misc.ddt_stats_sha[num].lookup_stored_miss |
lookup_stored_miss |
kstat.zfs.zroot.misc.ddt_stats_skein |
|
kstat.zfs.zroot.misc.ddt_stats_skein.log_active_entries |
log_active_entries |
kstat.zfs.zroot.misc.ddt_stats_skein.log_flush_rate |
log_flush_rate |
kstat.zfs.zroot.misc.ddt_stats_skein.log_flush_time_rate |
log_flush_time_rate |
kstat.zfs.zroot.misc.ddt_stats_skein.log_flushing_entries |
log_flushing_entries |
kstat.zfs.zroot.misc.ddt_stats_skein.log_ingest_rate |
log_ingest_rate |
kstat.zfs.zroot.misc.ddt_stats_skein.lookup |
lookup |
kstat.zfs.zroot.misc.ddt_stats_skein.lookup_existing |
lookup_existing |
kstat.zfs.zroot.misc.ddt_stats_skein.lookup_live_hit |
lookup_live_hit |
kstat.zfs.zroot.misc.ddt_stats_skein.lookup_live_miss |
lookup_live_miss |
kstat.zfs.zroot.misc.ddt_stats_skein.lookup_live_wait |
lookup_live_wait |
kstat.zfs.zroot.misc.ddt_stats_skein.lookup_log_active_hit |
lookup_log_active_hit |
kstat.zfs.zroot.misc.ddt_stats_skein.lookup_log_flushing_hit |
lookup_log_flushing_hit |
kstat.zfs.zroot.misc.ddt_stats_skein.lookup_log_hit |
lookup_log_hit |
kstat.zfs.zroot.misc.ddt_stats_skein.lookup_log_miss |
lookup_log_miss |
kstat.zfs.zroot.misc.ddt_stats_skein.lookup_new |
lookup_new |
kstat.zfs.zroot.misc.ddt_stats_skein.lookup_stored_hit |
lookup_stored_hit |
kstat.zfs.zroot.misc.ddt_stats_skein.lookup_stored_miss |
lookup_stored_miss |
kstat.zfs.zroot.misc.dmu_tx_assign |
|
kstat.zfs.zroot.misc.dmu_tx_assign.[num] ns |
1 ns |
kstat.zfs.zroot.misc.guid |
guid |
kstat.zfs.zroot.misc.iostats |
|
kstat.zfs.zroot.misc.iostats.arc_read_bytes |
arc_read_bytes |
kstat.zfs.zroot.misc.iostats.arc_read_count |
arc_read_count |
kstat.zfs.zroot.misc.iostats.arc_write_bytes |
arc_write_bytes |
kstat.zfs.zroot.misc.iostats.arc_write_count |
arc_write_count |
kstat.zfs.zroot.misc.iostats.autotrim_bytes_failed |
autotrim_bytes_failed |
kstat.zfs.zroot.misc.iostats.autotrim_bytes_skipped |
autotrim_bytes_skipped |
kstat.zfs.zroot.misc.iostats.autotrim_bytes_written |
autotrim_bytes_written |
kstat.zfs.zroot.misc.iostats.autotrim_extents_failed |
autotrim_extents_failed |
kstat.zfs.zroot.misc.iostats.autotrim_extents_skipped |
autotrim_extents_skipped |
kstat.zfs.zroot.misc.iostats.autotrim_extents_written |
autotrim_extents_written |
kstat.zfs.zroot.misc.iostats.direct_read_bytes |
direct_read_bytes |
kstat.zfs.zroot.misc.iostats.direct_read_count |
direct_read_count |
kstat.zfs.zroot.misc.iostats.direct_write_bytes |
direct_write_bytes |
kstat.zfs.zroot.misc.iostats.direct_write_count |
direct_write_count |
kstat.zfs.zroot.misc.iostats.simple_trim_bytes_failed |
simple_trim_bytes_failed |
kstat.zfs.zroot.misc.iostats.simple_trim_bytes_skipped |
simple_trim_bytes_skipped |
kstat.zfs.zroot.misc.iostats.simple_trim_bytes_written |
simple_trim_bytes_written |
kstat.zfs.zroot.misc.iostats.simple_trim_extents_failed |
simple_trim_extents_failed |
kstat.zfs.zroot.misc.iostats.simple_trim_extents_skipped |
simple_trim_extents_skipped |
kstat.zfs.zroot.misc.iostats.simple_trim_extents_written |
simple_trim_extents_written |
kstat.zfs.zroot.misc.iostats.trim_bytes_failed |
trim_bytes_failed |
kstat.zfs.zroot.misc.iostats.trim_bytes_skipped |
trim_bytes_skipped |
kstat.zfs.zroot.misc.iostats.trim_bytes_written |
trim_bytes_written |
kstat.zfs.zroot.misc.iostats.trim_extents_failed |
trim_extents_failed |
kstat.zfs.zroot.misc.iostats.trim_extents_skipped |
trim_extents_skipped |
kstat.zfs.zroot.misc.iostats.trim_extents_written |
trim_extents_written |
kstat.zfs.zroot.misc.state |
state |
kstat.zfs.zroot.multihost |
multihost |
kstat.zfs.zroot.reads |
reads |
kstat.zfs.zroot.txgs |
txgs |
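Per-pool kstats such as kstat.zfs.[pool].misc.state can also be read by name. A minimal sketch using the zroot pool from the listing above, assuming the state node returns a short text value:

/*
 * Sketch only: read the string-valued pool state kstat listed above.
 * The pool name ("zroot") comes from the entries above; substitute the
 * pool of interest.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
    char state[64];
    size_t len = sizeof(state);

    if (sysctlbyname("kstat.zfs.zroot.misc.state", state, &len,
        NULL, 0) != 0)
        err(1, "kstat.zfs.zroot.misc.state");
    state[len < sizeof(state) ? len : sizeof(state) - 1] = '\0';

    /* Trim a trailing newline, if the kstat includes one. */
    state[strcspn(state, "\n")] = '\0';
    printf("pool zroot state: %s\n", state);
    return (0);
}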
machdep |
machine dependent |
machdep.acpi_root |
The physical address of the RSDP |
machdep.acpi_timer_freq |
ACPI timer frequency |
machdep.adjkerntz |
Local offset from UTC in seconds |
machdep.atrtc_power_lost |
RTC lost power on last power cycle (probably caused by an empty CMOS battery) |
machdep.bootmethod |
System firmware boot method |
machdep.disable_msix_migration |
Disable migration of MSI-X interrupts between CPUs |
machdep.disable_mtrrs |
Disable MTRRs. |
machdep.disable_rtc_set |
Disallow adjusting time-of-day clock |
machdep.disable_tsc |
Disable x86 Time Stamp Counter |
machdep.disable_tsc_calibration |
Disable early TSC frequency calibration |
machdep.dump_retry_count |
Number of times dump has to retry before bailing out |
machdep.efi_arch |
EFI Firmware Architecture |
machdep.efi_map |
Raw EFI Memory Map |
machdep.efi_rt_handle_faults |
Call EFI RT methods with a fault handler wrapper around them |
machdep.enable_panic_key |
Enable panic via keypress specified in kbdmap(5) |
machdep.first_msi_irq |
Number of first IRQ reserved for MSI and MSI-X interrupts |
machdep.hwpstate_pkg_ctrl |
Set 1 (default) to enable package-level control, 0 to disable |
machdep.hyperthreading_allowed |
Use Intel HTT logical CPUs |
machdep.hyperthreading_intr_allowed |
Allow interrupts on HTT logical CPUs |
machdep.i8254_freq |
i8254 timer frequency |
machdep.idle |
currently selected idle function |
machdep.idle_apl31 |
Apollo Lake APL31 MWAIT bug workaround |
machdep.idle_available |
list of available idle functions |
machdep.idle_mwait |
Use MONITOR/MWAIT for short idle |
machdep.intr_apic_id_limit |
Maximum permitted APIC ID for interrupt delivery (-1 is unlimited) |
machdep.max_ldt_segment |
Maximum number of allowed LDT segments in the single address space |
machdep.mitigations |
Machine dependent platform mitigations. |
machdep.mitigations.flush_rsb_ctxsw |
Flush Return Stack Buffer on context switch |
machdep.mitigations.ibrs |
Indirect Branch Restricted Speculation active |
machdep.mitigations.ibrs.active |
Indirect Branch Restricted Speculation active |
machdep.mitigations.ibrs.disable |
Disable Indirect Branch Restricted Speculation |
machdep.mitigations.mds |
Microarchitectural Data Sampling Mitigation state |
machdep.mitigations.mds.disable |
Microarchitectural Data Sampling Mitigation (0 - off, 1 - on VERW, 2 - on SW, 3 - on AUTO) |
machdep.mitigations.mds.state |
Microarchitectural Data Sampling Mitigation state |
machdep.mitigations.rngds |
MCU Optimization, disable RDSEED mitigation |
machdep.mitigations.rngds.enable |
MCU Optimization, disabling RDSEED mitigation control (0 - mitigation disabled (RDSEED optimized), 1 - mitigation enabled) |
machdep.mitigations.rngds.state |
MCU Optimization state |
machdep.mitigations.ssb |
Speculative Store Bypass Disable active |
machdep.mitigations.ssb.active |
Speculative Store Bypass Disable active |
machdep.mitigations.ssb.disable |
Speculative Store Bypass Disable (0 - off, 1 - on, 2 - auto) |
machdep.mitigations.taa |
TSX Asynchronous Abort Mitigation |
machdep.mitigations.taa.enable |
TAA Mitigation enablement control (0 - off, 1 - disable TSX, 2 - VERW, 3 - on AUTO) |
machdep.mitigations.taa.state |
TAA Mitigation state |
machdep.mitigations.zenbleed |
Zenbleed OS-triggered prevention (via chicken bit) |
machdep.mitigations.zenbleed.enable |
Enable Zenbleed OS-triggered mitigation (chicken bit) (0: Force disable, 1: Force enable, 2: Automatic determination) |
machdep.mitigations.zenbleed.state |
Zenbleed OS-triggered mitigation (chicken bit) state |
machdep.mwait_cpustop_broken |
Cannot reliably wake MONITOR/MWAIT CPUs without interrupts |
machdep.nkpt |
Number of kernel page table pages allocated on bootup |
machdep.nmi_flush_l1d_sw |
Flush L1 Data Cache on NMI exit, software bhyve L1TF mitigation assist |
machdep.nmi_is_broadcast |
Chipset NMI is broadcast |
machdep.num_msi_irqs |
Number of IRQs reserved for MSI and MSI-X interrupts |
machdep.panic_on_nmi |
Panic on NMI: 1 = H/W failure; 2 = unknown; 0xff = all |
machdep.prot_fault_translation |
Control signal to deliver on protection fault |
machdep.rtc_save_period |
Save system time to RTC with this period (in seconds) |
machdep.smap |
Raw BIOS SMAP data |
machdep.stop_mwait |
Use MONITOR/MWAIT when stopping CPU, if available |
machdep.syscall_ret_flush_l1d |
Flush L1D on syscall return with error (0 - off, 1 - on, 2 - use hw only, 3 - use sw only) |
machdep.tsc_freq |
Time Stamp Counter frequency |
machdep.uprintf_signal |
Print debugging information on trap signal to ctty |
machdep.vga_aspect_scale |
Aspect scale ratio (3:4): actual times 100 |
machdep.wall_cmos_clock |
Enables application of machdep.adjkerntz |
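Any of the machdep entries above can likewise be queried by name. A minimal sketch reading machdep.tsc_freq, assuming a 64-bit integer node as on amd64; other platforms may expose a different type:

/*
 * Sketch only: read the Time Stamp Counter frequency described above.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    uint64_t freq = 0;
    size_t len = sizeof(freq);

    if (sysctlbyname("machdep.tsc_freq", &freq, &len, NULL, 0) != 0)
        err(1, "machdep.tsc_freq");

    /* TSC frequency in Hz, per the description above. */
    printf("TSC frequency: %ju Hz (%.3f GHz)\n",
        (uintmax_t)freq, freq / 1e9);
    return (0);
}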
net |
Network, (see socket.h) |
net.accf |
Accept filters |
net.accf.unloadable |
Allow unload of accept filters (not recommended) |
net.add_addr_allfibs |
|
net.bluetooth |
Bluetooth family |
net.bluetooth.hci |
Bluetooth HCI family |
net.bluetooth.hci.command_timeout |
HCI command timeout (sec) |
net.bluetooth.hci.connection_timeout |
HCI connect timeout (sec) |
net.bluetooth.hci.max_neighbor_age |
Maximal HCI neighbor cache entry age (sec) |
net.bluetooth.hci.sockets |
Bluetooth HCI sockets family |
net.bluetooth.hci.sockets.raw |
Bluetooth raw HCI sockets family |
net.bluetooth.hci.sockets.raw.debug_level |
Bluetooth raw HCI sockets debug level |
net.bluetooth.hci.sockets.raw.ioctl_timeout |
Bluetooth raw HCI sockets ioctl timeout |
net.bluetooth.hci.sockets.raw.queue_drops |
Bluetooth raw HCI sockets input queue drops |
net.bluetooth.hci.sockets.raw.queue_len |
Bluetooth raw HCI sockets input queue length |
net.bluetooth.hci.sockets.raw.queue_maxlen |
Bluetooth raw HCI sockets input queue max. length |
net.bluetooth.l2cap |
Bluetooth L2CAP family |
net.bluetooth.l2cap.ertx_timeout |
L2CAP ERTX timeout (sec) |
net.bluetooth.l2cap.rtx_timeout |
L2CAP RTX timeout (sec) |
net.bluetooth.l2cap.sockets |
Bluetooth L2CAP sockets family |
net.bluetooth.l2cap.sockets.raw |
Bluetooth raw L2CAP sockets family |
net.bluetooth.l2cap.sockets.raw.debug_level |
Bluetooth raw L2CAP sockets debug level |
net.bluetooth.l2cap.sockets.raw.ioctl_timeout |
Bluetooth raw L2CAP sockets ioctl timeout |
net.bluetooth.l2cap.sockets.raw.queue_drops |
Bluetooth raw L2CAP sockets input queue drops |
net.bluetooth.l2cap.sockets.raw.queue_len |
Bluetooth raw L2CAP sockets input queue length |
net.bluetooth.l2cap.sockets.raw.queue_maxlen |
Bluetooth raw L2CAP sockets input queue max. length |
net.bluetooth.l2cap.sockets.seq |
Bluetooth SEQPACKET L2CAP sockets family |
net.bluetooth.l2cap.sockets.seq.debug_level |
Bluetooth SEQPACKET L2CAP sockets debug level |
net.bluetooth.l2cap.sockets.seq.queue_drops |
Bluetooth SEQPACKET L2CAP sockets input queue drops |
net.bluetooth.l2cap.sockets.seq.queue_len |
Bluetooth SEQPACKET L2CAP sockets input queue length |
net.bluetooth.l2cap.sockets.seq.queue_maxlen |
Bluetooth SEQPACKET L2CAP sockets input queue max. length |
net.bluetooth.rfcomm |
Bluetooth RFCOMM family |
net.bluetooth.rfcomm.sockets |
Bluetooth RFCOMM sockets family |
net.bluetooth.rfcomm.sockets.stream |
Bluetooth STREAM RFCOMM sockets family |
net.bluetooth.rfcomm.sockets.stream.debug_level |
Bluetooth STREAM RFCOMM sockets debug level |
net.bluetooth.rfcomm.sockets.stream.timeout |
Bluetooth STREAM RFCOMM sockets timeout |
net.bluetooth.sco |
Bluetooth SCO family |
net.bluetooth.sco.rtx_timeout |
SCO RTX timeout (sec) |
net.bluetooth.sco.sockets |
Bluetooth SCO sockets family |
net.bluetooth.sco.sockets.seq |
Bluetooth SEQPACKET SCO sockets family |
net.bluetooth.sco.sockets.seq.debug_level |
Bluetooth SEQPACKET SCO sockets debug level |
net.bluetooth.sco.sockets.seq.queue_drops |
Bluetooth SEQPACKET SCO sockets input queue drops |
net.bluetooth.sco.sockets.seq.queue_len |
Bluetooth SEQPACKET SCO sockets input queue length |
net.bluetooth.sco.sockets.seq.queue_maxlen |
Bluetooth SEQPACKET SCO sockets input queue max. length |
net.bluetooth.usb_isoc_enable |
enable isochronous transfers |
net.bluetooth.version |
Version of the stack |
net.bpf |
bpf sysctl |
net.bpf.bufsize |
Default capture buffer size in bytes |
net.bpf.maxbufsize |
Maximum capture buffer in bytes |
net.bpf.maxinsns |
Maximum bpf program instructions |
net.bpf.optimize_writers |
Do not send packets until BPF program is set |
net.bpf.stats |
bpf statistics portal |
net.bpf.zerocopy_enable |
Enable new zero-copy BPF buffer sessions |
net.debugnet |
debugnet parameters |
net.debugnet.arp_nretries |
Number of ARP attempts before giving up |
net.debugnet.debug |
Debug message verbosity (0: off; 1: on; 2: verbose) |
net.debugnet.fib |
Fib to use when sending dump |
net.debugnet.npolls |
Number of times to poll before assuming packet loss (0.5ms per poll) |
net.debugnet.nretries |
Number of retransmit attempts before giving up |
net.fibs |
set number of fibs |
net.graph |
netgraph Family |
net.graph.abi_version |
|
net.graph.control |
CONTROL |
net.graph.control.proto |
|
net.graph.data |
DATA |
net.graph.data.proto |
|
net.graph.family |
|
net.graph.maxalloc |
Maximum number of non-data queue items to allocate |
net.graph.maxdata |
Maximum number of data queue items to allocate |
net.graph.maxdgram |
Maximum outgoing Netgraph datagram size |
net.graph.msg_version |
|
net.graph.recvspace |
Maximum space for incoming Netgraph datagrams |
net.graph.threads |
Number of queue processing threads |
net.hvsock |
HyperV socket |
net.hvsock.hvs_dbg_level |
hyperv socket debug level: 0 = none, 1 = info, 2 = error, 3 = verbose |
net.ifdescr_maxlen |
administrative maximum length for interface description |
net.iflib |
iflib driver parameters |
net.iflib.encap_load_mbuf_fail |
# busdma load failures |
net.iflib.encap_pad_mbuf_fail |
# runt frame pad failures |
net.iflib.encap_txd_encap_fail |
# driver encap failures |
net.iflib.encap_txq_avail_fail |
# txq avail failures |
net.iflib.fast_intrs |
# fast_intr calls |
net.iflib.fl_refills |
# refills |
net.iflib.fl_refills_large |
# large refills |
net.iflib.min_tx_latency |
minimize transmit latency at the possible expense of throughput |
net.iflib.no_tx_batch |
minimize transmit latency at the possible expense of throughput |
net.iflib.rx_allocs |
# RX allocations |
net.iflib.rx_ctx_inactive |
# times rxeof called with inactive context |
net.iflib.rx_if_input |
# times rxeof called if_input |
net.iflib.rx_intr_enables |
# RX intr enables |
net.iflib.rx_unavail |
# times rxeof called with no available data |
net.iflib.rxd_flush |
# times rxd_flush called |
net.iflib.task_fn_rx |
# task_fn_rx calls |
net.iflib.timer_default |
number of ticks between iflib_timer calls |
net.iflib.tx_encap |
# TX mbufs encapped |
net.iflib.tx_frees |
# TX frees |
net.iflib.tx_seen |
# TX mbufs seen |
net.iflib.tx_sent |
# TX mbufs sent |
net.iflib.txq_drain_flushing |
# drain flushes |
net.iflib.txq_drain_notready |
# drain notready |
net.iflib.txq_drain_oactive |
# drain oactives |
net.iflib.verbose_debug |
enable verbose debugging |
net.inet |
Internet Family |
net.inet.accf |
Accept filters |
net.inet.accf.http |
HTTP accept filter |
net.inet.accf.http.parsehttpversion |
Parse HTTP version so that non-1.x requests work |
net.inet.ah |
AH |
net.inet.esp |
ESP |
net.inet.icmp |
ICMP |
net.inet.icmp.bmcastecho |
Reply to multicast ICMP Echo Request and Timestamp packets |
net.inet.icmp.drop_redirect |
Ignore ICMP redirects |
net.inet.icmp.error_keeptags |
ICMP error response keeps copy of mbuf_tags of original packet |
net.inet.icmp.icmplim |
Maximum number of ICMP responses per second |
net.inet.icmp.icmplim_jitter |
Random icmplim jitter adjustment limit |
net.inet.icmp.icmplim_output |
Enable logging of ICMP response rate limiting |
net.inet.icmp.log_redirect |
Log ICMP redirects to the console |
net.inet.icmp.maskfake |
Fake reply to ICMP Address Mask Request packets |
net.inet.icmp.maskrepl |
Reply to ICMP Address Mask Request packets |
net.inet.icmp.quotelen |
Number of bytes from original packet to quote in ICMP reply |
net.inet.icmp.redirtimeout |
Delay in seconds before expiring redirect route |
net.inet.icmp.reply_from_interface |
ICMP reply from incoming interface for non-local packets |
net.inet.icmp.reply_src |
ICMP reply source for non-local packets |
net.inet.icmp.stats |
ICMP statistics (struct icmpstat, netinet/icmp_var.h) |
net.inet.icmp.tstamprepl |
Respond to ICMP Timestamp packets |
net.inet.igmp |
IGMP |
net.inet.igmp.default_version |
Default version of IGMP to run on each interface |
net.inet.igmp.gsrdelay |
Rate limit for IGMPv3 Group-and-Source queries in seconds |
net.inet.igmp.ifinfo |
Per-interface IGMPv3 state |
net.inet.igmp.legacysupp |
Allow v1/v2 reports to suppress v3 group responses |
net.inet.igmp.recvifkludge |
Rewrite IGMPv1/v2 reports from 0.0.0.0 to contain subnet address |
net.inet.igmp.sendlocal |
Send IGMP membership reports for 224.0.0.0/24 groups |
net.inet.igmp.sendra |
Send IP Router Alert option in IGMPv2/v3 messages |
net.inet.igmp.stats |
IGMP statistics (struct igmpstat, netinet/igmp_var.h) |
net.inet.igmp.v1enable |
Enable backwards compatibility with IGMPv1 |
net.inet.igmp.v2enable |
Enable backwards compatibility with IGMPv2 |
net.inet.ip |
IP |
net.inet.ip.accept_sourceroute |
Enable accepting source routed IP packets |
net.inet.ip.allow_net0 |
Allow forwarding of and ICMP response to addresses in network 0/8 |
net.inet.ip.allow_net240 |
Allow forwarding of and ICMP response to Experimental addresses, aka Class E (240/4) |
net.inet.ip.broadcast_lowest |
Treat lowest address on a subnet (host 0) as broadcast |
net.inet.ip.connect_inaddr_wild |
Allow connecting to INADDR_ANY or INADDR_BROADCAST for connect(2) |
net.inet.ip.curfrags |
Current number of IPv4 fragments across all reassembly queues |
net.inet.ip.forwarding |
Enable IP forwarding between interfaces |
net.inet.ip.fragpackets |
Current number of IPv4 fragment reassembly queue entries |
net.inet.ip.fragttl |
IP fragment life time on reassembly queue (seconds) |
net.inet.ip.gifttl |
Default TTL value for encapsulated packets |
net.inet.ip.intr_queue_drops |
Number of packets dropped from the IP input queue |
net.inet.ip.intr_queue_maxlen |
Maximum size of the IP input queue |
net.inet.ip.loopback_prefixlen |
Prefix length of address space reserved for loopback |
net.inet.ip.maxfragbucketsize |
Maximum number of IPv4 fragment reassembly queue entries per bucket |
net.inet.ip.maxfragpackets |
Maximum number of IPv4 fragment reassembly queue entries |
net.inet.ip.maxfrags |
Maximum number of IPv4 fragments allowed across all reassembly queues |
net.inet.ip.maxfragsperpacket |
Maximum number of IPv4 fragments allowed per packet |
net.inet.ip.mcast |
IPv4 multicast |
net.inet.ip.mcast.filters |
Per-interface stack-wide source filters |
net.inet.ip.mcast.loop |
Loopback multicast datagrams by default |
net.inet.ip.mcast.maxgrpsrc |
Max source filters per group |
net.inet.ip.mcast.maxsocksrc |
Max source filters per socket |
net.inet.ip.no_same_prefix |
Refuse to create same prefixes on different interfaces |
net.inet.ip.portrange |
IP Ports |
net.inet.ip.portrange.first |
|
net.inet.ip.portrange.hifirst |
|
net.inet.ip.portrange.hilast |
|
net.inet.ip.portrange.last |
|
net.inet.ip.portrange.lowfirst |
|
net.inet.ip.portrange.lowlast |
|
net.inet.ip.portrange.randomcps |
Maximum number of random port allocations before switching to a sequential one |
net.inet.ip.portrange.randomized |
Enable random port allocation |
net.inet.ip.portrange.randomtime |
Minimum time to keep sequential port allocation before switching to a random one |
net.inet.ip.portrange.reservedhigh |
|
net.inet.ip.portrange.reservedlow |
|
net.inet.ip.process_options |
Enable IP options processing ([LS]SRR, RR, TS) |
net.inet.ip.random_id |
Assign random ip_id values |
net.inet.ip.random_id_collisions |
Count of IP ID collisions |
net.inet.ip.random_id_period |
IP ID Array size |
net.inet.ip.random_id_total |
Count of IP IDs created |
net.inet.ip.reass_hashsize |
Size of IP fragment reassembly hashtable |
net.inet.ip.redirect |
Enable sending IP redirects |
net.inet.ip.rfc1122_strong_es |
Packet's IP destination address must match address on arrival interface |
net.inet.ip.rfc6864 |
Use constant IP ID for atomic datagrams |
net.inet.ip.source_address_validation |
Drop incoming packets with source address that is a local address |
net.inet.ip.sourceroute |
Enable forwarding source routed IP packets |
net.inet.ip.stats |
IP statistics (struct ipstat, netinet/ip_var.h) |
net.inet.ip.ttl |
Maximum TTL on IP packets |
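Writable knobs in the net.inet.ip group above, such as net.inet.ip.forwarding, take a new value through the newp/newlen arguments of sysctlbyname(3). A minimal sketch, assuming an int-valued 0/1 node; setting it requires root:

/*
 * Sketch only: read the current IP forwarding setting and request that
 * it be enabled, in a single sysctlbyname(3) call.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
    int old = 0, enable = 1;
    size_t oldlen = sizeof(old);

    /* Read the current value and request forwarding to be enabled. */
    if (sysctlbyname("net.inet.ip.forwarding", &old, &oldlen,
        &enable, sizeof(enable)) != 0)
        err(1, "net.inet.ip.forwarding");
    printf("IP forwarding: was %d, now %d\n", old, enable);
    return (0);
}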
net.inet.ipcomp |
IPCOMP |
net.inet.ipip |
IPIP |
net.inet.ipsec |
IPSEC |
net.inet.ipsec.debug |
Enable IPsec debugging output when set. |
net.inet.raw |
RAW |
net.inet.raw.bind_all_fibs |
Bound sockets receive traffic from all FIBs |
net.inet.raw.maxdgram |
Maximum outgoing raw IP datagram size |
net.inet.raw.pcblist |
List of active raw IP sockets |
net.inet.raw.recvspace |
Maximum space for incoming raw IP datagrams |
net.inet.sctp |
SCTP |
net.inet.tcp |
TCP |
net.inet.tcp.abc_l_var |
Cap the max cwnd increment during slow-start to this number of segments |
net.inet.tcp.ack_war_cnt |
Maximum number of challenge ACKs sent per TCP connection during the time interval (ack_war_timewindow) |
net.inet.tcp.ack_war_timewindow |
Time interval in ms used to limit the number (ack_war_cnt) of challenge ACKs sent per TCP connection |
net.inet.tcp.always_keepalive |
Assume SO_KEEPALIVE on all TCP connections |
net.inet.tcp.bb |
TCP Black Box controls |
net.inet.tcp.bb.disable_all |
Disable all BB logging for all connections |
net.inet.tcp.bb.log_auto_all |
Auto-select from all sessions (rather than just those with IDs) |
net.inet.tcp.bb.log_auto_mode |
Logging mode for auto-selected sessions (default is TCP_LOG_STATE_TAIL) |
net.inet.tcp.bb.log_auto_ratio |
Do auto capturing for 1 out of N sessions |
net.inet.tcp.bb.log_global_entries |
Current number of events maintained for all TCP sessions |
net.inet.tcp.bb.log_global_limit |
Maximum number of events maintained for all TCP sessions |
net.inet.tcp.bb.log_id_entries |
Current number of log IDs |
net.inet.tcp.bb.log_id_limit |
Maximum number of log IDs |
net.inet.tcp.bb.log_id_tcpcb_entries |
Current number of tcpcbs with log IDs |
net.inet.tcp.bb.log_id_tcpcb_limit |
Maximum number of tcpcbs with log IDs |
net.inet.tcp.bb.log_session_limit |
Maximum number of events maintained for each TCP session |
net.inet.tcp.bb.log_verbose |
Force verbose logging for TCP traces |
net.inet.tcp.bb.log_version |
Version of log formats exported |
net.inet.tcp.bb.pcb_ids_cur |
Number of pcb IDs allocated in the system |
net.inet.tcp.bb.pcb_ids_tot |
Total number of pcb IDs that have been allocated |
net.inet.tcp.bb.tp |
TCP Black Box Trace Point controls |
net.inet.tcp.bb.tp.bbmode |
Which BB logging mode is activated? |
net.inet.tcp.bb.tp.count |
How many connections that hit the trace point will have BB logging turned on? |
net.inet.tcp.bb.tp.number |
Trace point number to activate (0 = none, 0xffffffff = all) |
net.inet.tcp.bind_all_fibs |
Bound sockets receive traffic from all FIBs |
net.inet.tcp.blackhole |
Do not send RST on segments to closed ports |
net.inet.tcp.blackhole_local |
Enforce net.inet.tcp.blackhole for locally originated packets |
net.inet.tcp.cc |
Congestion control related settings |
net.inet.tcp.cc.abe |
Enable draft-ietf-tcpm-alternativebackoff-ecn (TCP Alternative Backoff with ECN) |
net.inet.tcp.cc.abe_frlossreduce |
Apply standard beta instead of ABE-beta during ECN-signalled congestion recovery episodes if loss also needs to be repaired |
net.inet.tcp.cc.algorithm |
Default congestion control algorithm |
net.inet.tcp.cc.available |
List available congestion control algorithms |
net.inet.tcp.cc.hystartplusplus |
New Reno related HyStart++ settings |
net.inet.tcp.cc.hystartplusplus.bblogs |
Generate HyStart++ Black Box logs when BB logging is enabled |
net.inet.tcp.cc.hystartplusplus.css_growth_div |
Divisor applied to the congestion window growth while in HyStart++ CSS |
net.inet.tcp.cc.hystartplusplus.css_rounds |
The number of rounds HyStart++ lasts in CSS before falling back to CA |
net.inet.tcp.cc.hystartplusplus.maxrtt_thresh |
HyStart++ maximum RTT threshold used in clamp (in microseconds) |
net.inet.tcp.cc.hystartplusplus.minrtt_thresh |
HyStart++ minimum RTT threshold used in clamp (in microseconds) |
net.inet.tcp.cc.hystartplusplus.n_rttsamples |
The number of RTT samples that must be seen to consider HyStart++ |
net.inet.tcp.cc.newreno |
New Reno related settings |
net.inet.tcp.cc.newreno.beta |
New Reno beta, specified as a number between 1 and 100 |
net.inet.tcp.cc.newreno.beta_ecn |
New Reno beta ECN, specified as a number between 1 and 100 |
net.inet.tcp.delacktime |
Time before a delayed ACK is sent |
net.inet.tcp.delayed_ack |
Delay ACK to try to piggyback it onto a data packet |
net.inet.tcp.dgp_failures |
Number of times we failed to enable DGP to avoid exceeding the limit |
net.inet.tcp.dgp_limit |
If the TCP stack does DGP, is there a limit (-1 = no limit, 0 = no DGP, N = number of connections) |
net.inet.tcp.do_lrd |
Perform Lost Retransmission Detection |
net.inet.tcp.do_prr |
Enable Proportional Rate Reduction per RFC 6937 |
net.inet.tcp.do_prr_conservative |
Do conservative Proportional Rate Reduction |
net.inet.tcp.do_tcpdrain |
Enable tcp_drain routine for extra help when low on mbufs |
net.inet.tcp.drop |
Drop TCP connection |
net.inet.tcp.drop_synfin |
Drop TCP packets with SYN+FIN set |
net.inet.tcp.ecn |
TCP ECN |
net.inet.tcp.ecn.enable |
TCP ECN support |
net.inet.tcp.ecn.maxretries |
Max retries before giving up on ECN |
net.inet.tcp.fast_finwait2_recycle |
Recycle closed FIN_WAIT_2 connections faster |
net.inet.tcp.fastopen |
TCP Fast Open |
net.inet.tcp.fastopen.acceptany |
Accept any non-empty cookie |
net.inet.tcp.fastopen.autokey |
Number of seconds between auto-generation of a new key; zero disables |
net.inet.tcp.fastopen.ccache_bucket_limit |
Max entries per bucket in client cookie cache |
net.inet.tcp.fastopen.ccache_buckets |
Client cookie cache number of buckets (power of 2) |
net.inet.tcp.fastopen.ccache_list |
List of all client cookie cache entries |
net.inet.tcp.fastopen.client_enable |
Enable/disable TCP Fast Open client functionality |
net.inet.tcp.fastopen.keylen |
Key length in bytes |
net.inet.tcp.fastopen.maxkeys |
Maximum number of keys supported |
net.inet.tcp.fastopen.maxpsks |
Maximum number of pre-shared keys supported |
net.inet.tcp.fastopen.numkeys |
Number of keys installed |
net.inet.tcp.fastopen.numpsks |
Number of pre-shared keys installed |
net.inet.tcp.fastopen.path_disable_time |
Seconds a TFO failure disables a {client_ip, server_ip, server_port} path |
net.inet.tcp.fastopen.psk_enable |
Enable/disable TCP Fast Open server pre-shared key mode |
net.inet.tcp.fastopen.server_enable |
Enable/disable TCP Fast Open server functionality |
net.inet.tcp.fastopen.setkey |
Install a new key |
net.inet.tcp.fastopen.setpsk |
Install a new pre-shared key |
net.inet.tcp.finwait2_timeout |
FIN-WAIT2 timeout |
net.inet.tcp.function_info |
List TCP function block name-to-ID mappings |
net.inet.tcp.functions_available |
List available TCP function sets |
net.inet.tcp.functions_default |
Set/get the default TCP functions |
net.inet.tcp.functions_inherit_listen_socket_stack |
Inherit listen socket's stack |
net.inet.tcp.getcred |
Get the xucred of a TCP connection |
net.inet.tcp.hostcache |
TCP Host cache |
net.inet.tcp.hostcache.bucketlimit |
Per-bucket hash limit for hostcache |
net.inet.tcp.hostcache.cachelimit |
Overall entry limit for hostcache |
net.inet.tcp.hostcache.count |
Current number of entries in hostcache |
net.inet.tcp.hostcache.enable |
Enable the TCP hostcache |
net.inet.tcp.hostcache.expire |
Expire time of TCP hostcache entries |
net.inet.tcp.hostcache.hashsize |
Size of TCP hostcache hashtable |
net.inet.tcp.hostcache.histo |
Print a histogram of hostcache hashbucket utilization |
net.inet.tcp.hostcache.list |
List of all hostcache entries |
net.inet.tcp.hostcache.prune |
Time between purge runs |
net.inet.tcp.hostcache.purge |
Expire all entries on next purge run |
net.inet.tcp.hostcache.purgenow |
Immediately purge all entries |
net.inet.tcp.icmp_may_rst |
Certain ICMP unreachable messages may abort connections in SYN_SENT |
net.inet.tcp.initcwnd_segments |
Slow-start flight size (initial congestion window) in number of segments |
net.inet.tcp.insecure_ack |
Follow RFC793 criteria for validating SEG.ACK |
net.inet.tcp.insecure_rst |
Follow RFC793 instead of RFC5961 criteria for accepting RST packets |
net.inet.tcp.insecure_syn |
Follow RFC793 instead of RFC5961 criteria for accepting SYN packets |
net.inet.tcp.isn_reseed_interval |
Seconds between reseeding of ISN secret |
net.inet.tcp.keepcnt |
Number of keepalive probes to send |
net.inet.tcp.keepidle |
Time before keepalive probes begin |
net.inet.tcp.keepinit |
Time to establish a connection |
net.inet.tcp.keepintvl |
Time between keepalive probes |
net.inet.tcp.log_debug |
Log errors caused by incoming TCP segments |
net.inet.tcp.log_in_vain |
Log all incoming TCP segments to closed ports |
net.inet.tcp.lro |
TCP LRO |
net.inet.tcp.lro.compressed |
Number of LRO entries compressed and sent to the transport |
net.inet.tcp.lro.entries |
Default number of LRO entries |
net.inet.tcp.lro.extra_mbuf |
Number of times we had an extra compressed ACK dropped into the tp |
net.inet.tcp.lro.fullqueue |
Number of LRO entries fully queued to the transport |
net.inet.tcp.lro.lockcnt |
Number of inp_wlocks taken by LRO |
net.inet.tcp.lro.lro_badcsum |
Number of packets that the common code saw with bad checksums |
net.inet.tcp.lro.lro_cpu_threshold |
Number of consecutive interrupts on the same CPU needed to declare an 'affinity' CPU |
net.inet.tcp.lro.lro_less_accurate |
Do we trade off efficiency by doing fewer timestamp operations for time accuracy? |
net.inet.tcp.lro.with_m_ackcmp |
Number of mbufs queued with M_ACKCMP flags set |
net.inet.tcp.lro.without_m_ackcmp |
Number of mbufs queued without M_ACKCMP |
net.inet.tcp.lro.wokeup |
Number of LRO entries where we woke up the transport via hpts |
net.inet.tcp.lro.would_have_but |
Number of times we would have had an extra compressed ACK, but mget failed |
net.inet.tcp.map_limit |
Total sendmap entries limit |
net.inet.tcp.maxtcptw |
Maximum number of compressed TCP TIME_WAIT entries |
net.inet.tcp.maxunacktime |
Maximum time (in ms) that a session can linger without making progress |
net.inet.tcp.minmss |
Minimum TCP Maximum Segment Size |
net.inet.tcp.msl |
Maximum segment lifetime |
net.inet.tcp.mssdflt |
Default TCP Maximum Segment Size |
net.inet.tcp.newcwv |
Enable New Congestion Window Validation per RFC7661 |
net.inet.tcp.nolocaltimewait |
Do not create TCP TIME_WAIT state for local connections |
net.inet.tcp.pacing_count |
Number of TCP connections being paced |
net.inet.tcp.pacing_failures |
Number of times we failed to enable pacing to avoid exceeding the limit |
net.inet.tcp.pacing_limit |
If the TCP stack does pacing, is there a limit (-1 = no limit, 0 = no pacing, N = number of connections) |
net.inet.tcp.path_mtu_discovery |
Enable Path MTU Discovery |
net.inet.tcp.pcbcount |
Number of active PCBs |
net.inet.tcp.pcblist |
List of active TCP connections |
net.inet.tcp.per_cpu_timers |
Run TCP timers on all CPUs |
net.inet.tcp.persmax |
Maximum persistence interval |
net.inet.tcp.persmin |
Minimum persistence interval |
net.inet.tcp.pmtud_blackhole_detection |
Path MTU Discovery Black Hole Detection Enabled |
net.inet.tcp.pmtud_blackhole_mss |
Path MTU Discovery Black Hole Detection lowered MSS |
net.inet.tcp.reass |
TCP Segment Reassembly Queue |
net.inet.tcp.reass.cursegments |
Global number of TCP Segments currently in Reassembly Queue |
net.inet.tcp.reass.maxqueuelen |
Maximum number of TCP Segments per Reassembly Queue |
net.inet.tcp.reass.maxsegments |
Global maximum number of TCP Segments in Reassembly Queue |
net.inet.tcp.reass.new_limit |
Use the new reassembly queue limit method (still under discussion) |
net.inet.tcp.reass.queueguard |
Number of TCP Segments in the Reassembly Queue at which we switch to guard mode |
net.inet.tcp.reass.stats |
TCP Segment Reassembly stats |
net.inet.tcp.recvbuf_auto |
Enable automatic receive buffer sizing |
net.inet.tcp.recvbuf_max |
Max size of automatic receive buffer |
net.inet.tcp.recvspace |
Initial receive socket buffer size |
net.inet.tcp.require_unique_port |
Require globally-unique ephemeral port for outgoing connections |
net.inet.tcp.retries |
Maximum number of consecutive timer-based retransmissions |
net.inet.tcp.rexmit_drop_options |
Drop TCP options from 3rd and later retransmitted SYN |
net.inet.tcp.rexmit_initial |
Initial Retransmission Timeout |
net.inet.tcp.rexmit_min |
Minimum Retransmission Timeout |
net.inet.tcp.rexmit_slop |
Retransmission Timer Slop |
net.inet.tcp.rfc1323 |
Enable rfc1323 (high performance TCP) extensions |
net.inet.tcp.rfc3042 |
Enable RFC 3042 (Limited Transmit) |
net.inet.tcp.rfc3390 |
Enable RFC 3390 (Increasing TCP's Initial Congestion Window) |
net.inet.tcp.rfc3465 |
Enable RFC 3465 (Appropriate Byte Counting) |
net.inet.tcp.sack |
TCP SACK |
net.inet.tcp.sack.enable |
Enable/Disable TCP SACK support |
net.inet.tcp.sack.globalholes |
Global number of TCP SACK holes currently allocated |
net.inet.tcp.sack.globalmaxholes |
Global maximum number of TCP SACK holes |
net.inet.tcp.sack.lrd |
Perform Lost Retransmission Detection |
net.inet.tcp.sack.maxholes |
Maximum number of TCP SACK holes allowed per connection |
net.inet.tcp.sack.revised |
Use revised SACK loss recovery per RFC 6675 |
net.inet.tcp.sack.tso |
Allow TSO during SACK loss recovery |
net.inet.tcp.sendbuf_auto |
Enable automatic send buffer sizing |
net.inet.tcp.sendbuf_auto_lowat |
Modify threshold for auto send buffer growth to account for SO_SNDLOWAT |
net.inet.tcp.sendbuf_inc |
Incrementor step size of automatic send buffer |
net.inet.tcp.sendbuf_max |
Max size of automatic send buffer |
net.inet.tcp.sendspace |
Initial send socket buffer size |
net.inet.tcp.setsockopt |
Set socket option for TCP endpoint |
net.inet.tcp.soreceive_stream |
Using soreceive_stream for TCP sockets |
net.inet.tcp.split_limit |
Total sendmap split entries limit |
net.inet.tcp.states |
TCP connection counts by TCP state |
net.inet.tcp.stats |
TCP statistics (struct tcpstat, netinet/tcp_var.h) |
net.inet.tcp.switch_to_ifnet_tls |
Switch TCP connection to ifnet TLS |
net.inet.tcp.switch_to_sw_tls |
Switch TCP connection to SW TLS |
net.inet.tcp.syncache |
TCP SYN cache |
net.inet.tcp.syncache.bucketlimit |
Per-bucket hash limit for syncache |
net.inet.tcp.syncache.cachelimit |
Overall entry limit for syncache |
net.inet.tcp.syncache.count |
Current number of entries in syncache |
net.inet.tcp.syncache.hashsize |
Size of TCP syncache hashtable |
net.inet.tcp.syncache.rexmtlimit |
Limit on SYN/ACK retransmissions |
net.inet.tcp.syncache.rst_on_sock_fail |
Send reset on socket allocation failure |
net.inet.tcp.syncache.see_other |
All syncache(4) entries are visible, ignoring UID/GID, jail(2) and mac(4) checks |
net.inet.tcp.syncookies |
Use TCP SYN cookies if the syncache overflows |
net.inet.tcp.syncookies_only |
Use only TCP SYN cookies |
net.inet.tcp.tcbhashsize |
Size of TCP control-block hashtable |
net.inet.tcp.tolerate_missing_ts |
Tolerate missing TCP timestamps |
net.inet.tcp.ts_offset_per_conn |
Initialize TCP timestamps per connection instead of per host pair |
net.inet.tcp.tso |
Enable TCP Segmentation Offload |
net.inet.tcp.udp_tunneling_overhead |
MSS reduction when using TCP over UDP |
net.inet.tcp.udp_tunneling_port |
Tunneling port for TCP over UDP |
net.inet.tcp.v6mssdflt |
Default TCP Maximum Segment Size for IPv6 |
net.inet.tcp.v6pmtud_blackhole_mss |
Path MTU Discovery IPv6 Black Hole Detection lowered MSS |
net.inet.udp |
UDP |
net.inet.udp.bind_all_fibs |
Bound sockets receive traffic from all FIBs |
net.inet.udp.blackhole |
Do not send port unreachables for refused connects |
net.inet.udp.blackhole_local |
Enforce net.inet.udp.blackhole for locally originated packets |
net.inet.udp.checksum |
Compute UDP checksum |
net.inet.udp.getcred |
Get the xucred of a UDP connection |
net.inet.udp.log_in_vain |
Log all incoming UDP packets |
net.inet.udp.maxdgram |
Maximum outgoing UDP datagram size |
net.inet.udp.pcblist |
List of active UDP sockets |
net.inet.udp.recvspace |
Maximum space for incoming UDP datagrams |
net.inet.udp.stats |
UDP statistics (struct udpstat, netinet/udp_var.h) |
net.inet6 |
Internet6 Family |
net.inet6.icmp6 |
ICMP6 |
net.inet6.icmp6.errppslimit |
Maximum number of ICMPv6 error/reply messages per second |
net.inet6.icmp6.icmp6lim_jitter |
Random errppslimit jitter adjustment limit |
net.inet6.icmp6.icmp6lim_output |
Enable logging of ICMPv6 response rate limiting |
net.inet6.icmp6.nd6_debug |
Log NDP debug messages |
net.inet6.icmp6.nd6_delay |
Delay in seconds before probing for reachability |
net.inet6.icmp6.nd6_drlist |
NDP default router list |
net.inet6.icmp6.nd6_gctimer |
|
net.inet6.icmp6.nd6_maxnudhint |
|
net.inet6.icmp6.nd6_maxqueuelen |
|
net.inet6.icmp6.nd6_mmaxtries |
Number of ICMPv6 NS messages sent during address resolution |
net.inet6.icmp6.nd6_onlink_ns_rfc4861 |
Accept 'on-link' ICMPv6 NS messages in compliance with RFC 4861 |
net.inet6.icmp6.nd6_prlist |
NDP prefix list |
net.inet6.icmp6.nd6_prune |
Frequency in seconds of checks for expired prefixes and routers |
net.inet6.icmp6.nd6_umaxtries |
Number of ICMPv6 NS messages sent during reachability detection |
net.inet6.icmp6.nd6_useloopback |
Create a loopback route when configuring an IPv6 address |
net.inet6.icmp6.nodeinfo |
Mask of enabled RFC4620 node information query types |
net.inet6.icmp6.nodeinfo_oldmcprefix |
Join old IPv6 NI group address in draft-ietf-ipngwg-icmp-name-lookup for compatibility with KAME implementation |
net.inet6.icmp6.rediraccept |
Accept ICMPv6 redirect messages |
net.inet6.icmp6.redirtimeout |
Delay in seconds before expiring redirect route |
net.inet6.icmp6.stats |
ICMPv6 statistics (struct icmp6stat, netinet/icmp6.h) |
net.inet6.ip6 |
IP6 |
net.inet6.ip6.accept_rtadv |
Default value of per-interface flag for accepting ICMPv6 RA messages |
net.inet6.ip6.addrctlpolicy |
|
net.inet6.ip6.auto_flowlabel |
Provide an IPv6 flowlabel in outbound packets |
net.inet6.ip6.auto_linklocal |
Default value of per-interface flag for automatically adding an IPv6 link-local address to interfaces when attached |
net.inet6.ip6.connect_in6addr_wild |
Allow connecting to the unspecified address for connect(2) |
net.inet6.ip6.dad_count |
Number of ICMPv6 NS messages sent during duplicate address detection |
net.inet6.ip6.dad_enhanced |
Enable Enhanced DAD, which adds a random nonce to NS messages for DAD. |
net.inet6.ip6.defmcasthlim |
Default hop limit for IPv6 multicast packets originating from this node |
net.inet6.ip6.forwarding |
Enable forwarding of IPv6 packets between interfaces |
net.inet6.ip6.frag6_nfragpackets |
Per-VNET number of IPv6 fragments across all reassembly queues. |
net.inet6.ip6.frag6_nfrags |
Global number of IPv6 fragments across all reassembly queues. |
net.inet6.ip6.fraglifetime_ms |
Fragment lifetime, in milliseconds |
net.inet6.ip6.gifhlim |
Default hop limit for encapsulated packets |
net.inet6.ip6.hdrnestlimit |
Default maximum number of IPv6 extension headers permitted on incoming IPv6 packets, 0 for no artificial limit |
net.inet6.ip6.hlim |
Default hop limit to use for outgoing IPv6 packets |
net.inet6.ip6.intr_queue_maxlen |
Maximum size of the IPv6 input queue |
net.inet6.ip6.kame_version |
KAME version string |
net.inet6.ip6.log_cannot_forward |
Log packets that cannot be forwarded |
net.inet6.ip6.log_interval |
Frequency in seconds at which to log IPv6 forwarding errors |
net.inet6.ip6.maxfragbucketsize |
Maximum number of reassembly queues per hash bucket |
net.inet6.ip6.maxfragpackets |
Default maximum number of outstanding fragmented IPv6 packets. A value of 0 means no fragmented packets will be accepted, while a value of -1 means no limit |
net.inet6.ip6.maxfrags |
Maximum allowed number of outstanding IPv6 packet fragments. A value of 0 means no fragmented packets will be accepted, while a value of -1 means no limit |
net.inet6.ip6.maxfragsperpacket |
Maximum allowed number of fragments per packet |
net.inet6.ip6.mcast |
IPv6 multicast |
net.inet6.ip6.mcast.filters |
Per-interface stack-wide source filters |
net.inet6.ip6.mcast.loop |
Loopback multicast datagrams by default |
net.inet6.ip6.mcast.maxgrpsrc |
Max source filters per group |
net.inet6.ip6.mcast.maxsocksrc |
Max source filters per socket |
net.inet6.ip6.mcast_pmtu |
Enable path MTU discovery for multicast packets |
net.inet6.ip6.no_radr |
Default value of per-interface flag to control whether routers sending ICMPv6 RA messages on that interface are added into the default router list |
net.inet6.ip6.norbit_raif |
Always clear the R flag in ICMPv6 NA messages when accepting RA on the interface |
net.inet6.ip6.prefer_tempaddr |
Prefer RFC3041 temporary addresses in source address selection |
net.inet6.ip6.redirect |
Send ICMPv6 redirects for unforwardable IPv6 packets |
net.inet6.ip6.rfc6204w3 |
Accept the default router list from ICMPv6 RA messages even when packet forwarding is enabled |
net.inet6.ip6.rip6stats |
Raw IP6 statistics (struct rip6stat, netinet6/raw_ip6.h) |
net.inet6.ip6.rr_prune |
|
net.inet6.ip6.source_address_validation |
Drop incoming packets with a source address that is a local address |
net.inet6.ip6.stats |
IP6 statistics (struct ip6stat, netinet6/ip6_var.h) |
net.inet6.ip6.temppltime |
Maximum preferred lifetime for temporary addresses |
net.inet6.ip6.tempvltime |
Maximum valid lifetime for temporary addresses |
net.inet6.ip6.use_defaultzone |
Use the default scope zone when none is specified |
net.inet6.ip6.use_deprecated |
Allow the use of addresses whose preferred lifetimes have expired |
net.inet6.ip6.use_tempaddr |
Create RFC3041 temporary addresses for autoconfigured addresses |
net.inet6.ip6.v6only |
Restrict AF_INET6 sockets to IPv6 addresses only |
net.inet6.ipsec6 |
IPSEC6 |
net.inet6.ipsec6.debug |
Enable IPsec debugging output when set. |
net.inet6.mld |
IPv6 Multicast Listener Discovery |
net.inet6.mld.gsrdelay |
Rate limit for MLDv2 Group-and-Source queries in seconds |
net.inet6.mld.ifinfo |
Per-interface MLDv2 state |
net.inet6.mld.use_allow |
Use ALLOW/BLOCK for RFC 4604 SSM joins/leaves |
net.inet6.mld.v1enable |
Enable fallback to MLDv1 |
net.inet6.mld.v2enable |
Enable MLDv2 |
net.inet6.sctp6 |
SCTP6 |
net.inet6.tcp6 |
TCP6 |
net.inet6.tcp6.getcred |
Get the xucred of a TCP6 connection |
net.inet6.udp6 |
UDP6 |
net.inet6.udp6.getcred |
Get the xucred of a UDP6 connection |
net.inet6.udp6.rfc6935_port |
Zero UDP checksum allowed for traffic to/from this port. |
net.isr |
netisr |
net.isr.bindthreads |
Bind netisr threads to CPUs. |
net.isr.defaultqlimit |
Default netisr per-protocol, per-CPU queue limit if not set by protocol |
net.isr.dispatch |
netisr dispatch policy |
net.isr.maxprot |
Compile-time limit on the number of protocols supported by netisr. |
net.isr.maxqlimit |
Maximum netisr per-protocol, per-CPU queue depth. |
net.isr.maxthreads |
Use at most this many CPUs for netisr processing |
net.isr.numthreads |
Number of extant netisr threads. |
net.isr.proto |
Return list of protocols registered with netisr |
net.isr.work |
Return list of per-workstream, per-protocol work in netisr |
net.isr.workstream |
Return list of workstreams implemented by netisr |
net.key |
Key Family |
net.key.ah_keymin |
|
net.key.blockacq_count |
|
net.key.blockacq_lifetime |
|
net.key.debug |
|
net.key.esp_auth |
|
net.key.esp_keymin |
|
net.key.int_random |
|
net.key.larval_lifetime |
|
net.key.preferred_oldsa |
|
net.key.recvspace |
Default key socket receive space |
net.key.sendspace |
Default key socket send space |
net.key.spdcache |
SPD cache |
net.key.spdcache.maxentries |
Maximum number of entries in the SPD cache (power of 2, 0 to disable) |
net.key.spdcache.threshold |
Number of SPs that make the SPD cache active |
net.key.spi_maxval |
|
net.key.spi_minval |
|
net.key.spi_trycnt |
|
net.link |
Link layers |
net.link.bridge |
Bridge |
net.link.bridge.allow_llz_overlap |
Allow overlap of link-local scope zones of a bridge interface and the member interfaces |
net.link.bridge.inherit_mac |
Inherit MAC address from the first bridge member |
net.link.bridge.ipfw |
Layer2 filter with IPFW |
net.link.bridge.ipfw_arp |
Filter ARP packets through IPFW layer2 |
net.link.bridge.log_mac_flap |
Log MAC address port flapping |
net.link.bridge.log_stp |
Log STP state changes |
net.link.bridge.pfil_bridge |
Packet filter on the bridge interface |
net.link.bridge.pfil_local_phys |
Packet filter on the physical interface for locally destined packets |
net.link.bridge.pfil_member |
Packet filter on the member interface |
net.link.bridge.pfil_onlyip |
Only pass IP packets when pfil is enabled |
net.link.ether |
Ethernet |
net.link.ether.arp |
|
net.link.ether.arp.log_level |
Minimum log(9) level for recording rate-limited ARP log messages. Higher values log more (emerg=0, info=6 (default), debug=7). |
net.link.ether.arp.stats |
ARP statistics (struct arpstat, net/if_arp.h) |
net.link.ether.inet |
|
net.link.ether.inet.allow_multicast |
Accept multicast addresses |
net.link.ether.inet.garp_rexmit_count |
Number of times to retransmit GARP packets; 0 to disable, maximum of 16 |
net.link.ether.inet.log_arp_movements |
Log ARP replies from MACs different from the one in the cache |
net.link.ether.inet.log_arp_permanent_modify |
Log ARP replies from MACs different from the one in the permanent ARP entry |
net.link.ether.inet.log_arp_wrong_iface |
Log ARP packets arriving on the wrong interface |
net.link.ether.inet.max_age |
ARP entry lifetime in seconds |
net.link.ether.inet.max_log_per_second |
Maximum number of remotely triggered ARP messages that can be logged per second |
net.link.ether.inet.maxhold |
Number of packets to hold per ARP entry |
net.link.ether.inet.maxtries |
ARP resolution attempts before returning error |
net.link.ether.inet.proxyall |
Enable proxy ARP for all suitable requests |
net.link.ether.inet.wait |
Incomplete ARP entry lifetime in seconds |
net.link.generic |
Generic link-management |
net.link.generic.ifdata |
Interface table |
net.link.generic.system |
Variables global to all interfaces |
net.link.generic.system.ifcount |
Maximum known interface index |
net.link.gif |
Generic Tunnel Interface |
net.link.gif.max_nesting |
Max nested tunnels |
net.link.ifqmaxlen |
max send queue size |
net.link.log_link_state_change |
log interface link state change events |
net.link.log_promisc_mode_change |
log promiscuous mode change events |
net.link.openvpn |
OpenVPN DCO Interface |
net.link.openvpn.async_crypto |
Use asynchronous mode to parallelize crypto jobs. |
net.link.openvpn.netisr_queue |
Use netisr_queue() rather than netisr_dispatch(). |
net.link.openvpn.replay_protection |
Validate sequence numbers |
net.link.tap |
Ethernet tunnel software network interface |
net.link.tap.debug |
|
net.link.tap.devfs_cloning |
Enable legacy devfs interface creation |
net.link.tap.up_on_open |
Bring interface up when /dev/tap is opened |
net.link.tap.user_open |
Enable legacy devfs interface creation for all users |
net.link.tun |
IP tunnel software network interface |
net.link.tun.devfs_cloning |
Enable legacy devfs interface creation |
net.link.vlan |
IEEE 802.1Q VLAN |
net.link.vlan.link |
for consistency |
net.link.vlan.mtag_pcp |
Retain VLAN PCP information as packets are passed up the stack |
net.link.vlan.soft_pad |
pad short frames before tagging |
net.local |
Local domain |
net.local.deferred |
File descriptors deferred to taskqueue for close. |
net.local.dgram |
SOCK_DGRAM |
net.local.dgram.maxdgram |
Maximum datagram size. |
net.local.dgram.pcblist |
List of active local datagram sockets |
net.local.dgram.recvspace |
Default datagram receive space. |
net.local.inflight |
File descriptors in flight. |
net.local.recycled |
Number of unreachable sockets claimed by the garbage collector. |
net.local.seqpacket |
SOCK_SEQPACKET |
net.local.seqpacket.maxseqpacket |
Default seqpacket send space. |
net.local.seqpacket.pcblist |
List of active local seqpacket sockets |
net.local.seqpacket.recvspace |
Default seqpacket receive space. |
net.local.sockcount |
Number of active local sockets. |
net.local.stream |
SOCK_STREAM |
net.local.stream.pcblist |
List of active local stream sockets |
net.local.stream.recvspace |
Default stream receive space. |
net.local.stream.sendspace |
Default stream send space. |
net.local.taskcount |
Number of times the garbage collector has run. |
net.my_fibnum |
default FIB of caller |
net.netdump |
netdump parameters |
net.netdump.arp_retries |
Number of ARP attempts before giving up |
net.netdump.debug |
Debug message verbosity |
net.netdump.enabled |
netdump configuration status |
net.netdump.path |
Server path for output files |
net.netdump.polls |
Number of times to poll before assuming packet loss (0.5ms per poll) |
net.netdump.retries |
Number of retransmit attempts before giving up |
net.netlink |
RFC3549 Netlink network state socket family |
net.netlink.debug |
Netlink per-subsystem debug levels |
net.netlink.debug.nl_domain_debug_level |
debuglevel |
net.netlink.debug.nl_generic_debug_level |
debuglevel |
net.netlink.debug.nl_generic_kpi_debug_level |
debuglevel |
net.netlink.debug.nl_iface_debug_level |
debuglevel |
net.netlink.debug.nl_iface_drivers_debug_level |
debuglevel |
net.netlink.debug.nl_io_debug_level |
debuglevel |
net.netlink.debug.nl_linux_debug_level |
debuglevel |
net.netlink.debug.nl_mod_debug_level |
debuglevel |
net.netlink.debug.nl_neigh_debug_level |
debuglevel |
net.netlink.debug.nl_nhop_debug_level |
debuglevel |
net.netlink.debug.nl_parser_debug_level |
debuglevel |
net.netlink.debug.nl_route_core_debug_level |
debuglevel |
net.netlink.debug.nl_route_debug_level |
debuglevel |
net.netlink.debug.nl_sysevent_debug_level |
debuglevel |
net.netlink.debug.nl_writer_debug_level |
debuglevel |
net.netlink.nl_maxsockbuf |
Maximum Netlink socket buffer size |
net.netlink.recvspace |
Default netlink socket receive space |
net.netlink.sendspace |
Default netlink socket send space |
net.pf |
pf(4) |
net.pf.filter_local |
Enable filtering for packets delivered to local network stack |
net.pf.request_maxcount |
Maximum number of tables, addresses, ... in a single ioctl() call |
net.pf.rule_tag_hashsize |
Size of pf(4) rule tag hashtable |
net.pf.source_nodes_hashsize |
Size of pf(4) source nodes hashtable |
net.pf.states_hashsize |
Size of pf(4) states hashtable |
net.route |
|
net.route.algo |
Fib algorithm lookups |
net.route.algo.bucket_change_threshold_rate |
Minimum update rate to delay sync |
net.route.algo.bucket_time_ms |
Time interval to calculate update rate |
net.route.algo.debug_level |
debuglevel |
net.route.algo.fib_max_sync_delay_ms |
Maximum time to delay sync (ms) |
net.route.algo.inet |
IPv4 longest prefix match lookups |
net.route.algo.inet.algo |
Set IPv4 lookup algo |
net.route.algo.inet.algo_list |
List of IPv4 lookup algorithms |
net.route.algo.inet6 |
IPv6 longest prefix match lookups |
net.route.algo.inet6.algo |
Set IPv6 lookup algo |
net.route.algo.inet6.algo_list |
List of IPv6 lookup algorithms |
net.route.debug |
|
net.route.debug.nhgrp_ctl_debug_level |
debuglevel |
net.route.debug.nhgrp_debug_level |
debuglevel |
net.route.debug.nhop_ctl_debug_level |
debuglevel |
net.route.debug.nhop_debug_level |
debuglevel |
net.route.debug.route_ctl_debug_level |
debuglevel |
net.route.debug.rt_helpers_debug_level |
debuglevel |
net.route.debug.rtsock_debug_level |
debuglevel |
net.route.hash_outbound |
Compute flowid for locally-originated packets |
net.route.ipv6_nexthop |
Enable IPv4 route via IPv6 Next Hop address |
net.route.multipath |
Enable route multipath |
net.route.netisr_maxqlen |
maximum routing socket dispatch queue length |
net.route.stats |
route statistics |
net.routetable |
Return route tables and interface/address lists |
net.rtsock |
Routing socket infrastructure |
net.rtsock.recvspace |
Default routing socket receive space |
net.rtsock.sendspace |
Default routing socket send space |
net.wlan |
IEEE 802.11 parameters |
net.wlan.[num] |
|
net.wlan.[num].%parent |
parent device |
net.wlan.[num].ampdu_mintraffic_be |
BE traffic tx aggr threshold (pps) |
net.wlan.[num].ampdu_mintraffic_bk |
BK traffic tx aggr threshold (pps) |
net.wlan.[num].ampdu_mintraffic_vi |
VI traffic tx aggr threshold (pps) |
net.wlan.[num].ampdu_mintraffic_vo |
VO traffic tx aggr threshold (pps) |
net.wlan.[num].amrr_max_sucess_threshold |
|
net.wlan.[num].amrr_min_sucess_threshold |
|
net.wlan.[num].amrr_rate_interval |
amrr operation interval (ms) |
net.wlan.[num].bmiss_max |
consecutive beacon misses before scanning |
net.wlan.[num].debug |
control debugging printfs |
net.wlan.[num].driver_caps |
driver capabilities |
net.wlan.[num].force_restart |
force a VAP restart |
net.wlan.[num].inact_auth |
station authentication timeout (sec) |
net.wlan.[num].inact_init |
station initial state timeout (sec) |
net.wlan.[num].inact_probe |
station inactivity probe timeout (sec) |
net.wlan.[num].inact_run |
station inactivity timeout (sec) |
net.wlan.[num].rate_stats |
ratectl node stats |
net.wlan.addba_backoff |
ADDBA request backoff (ms) |
net.wlan.addba_maxtries |
max ADDBA requests sent before backoff |
net.wlan.addba_timeout |
ADDBA request timeout (ms) |
net.wlan.ampdu_age |
AMPDU max reorder age (ms) |
net.wlan.cac_timeout |
CAC timeout (secs) |
net.wlan.debug |
debugging printfs |
net.wlan.devices |
names of available 802.11 devices |
net.wlan.hwmp |
IEEE 802.11s HWMP parameters |
net.wlan.hwmp.inact |
mesh route inactivity timeout (ms) |
net.wlan.hwmp.maxpreq_retries |
Maximum number of PREQ retries |
net.wlan.hwmp.net_diameter_traversal_time |
Estimated traversal time across the MBSS (ms) |
net.wlan.hwmp.pathlifetime |
path entry lifetime (ms) |
net.wlan.hwmp.rannint |
root announcement interval (ms) |
net.wlan.hwmp.rootconfint |
root confirmation interval (ms) (read-only) |
net.wlan.hwmp.rootint |
root interval (ms) |
net.wlan.hwmp.roottimeout |
root PREQ timeout (ms) |
net.wlan.hwmp.targetonly |
Set TO bit on generated PREQs |
net.wlan.mesh |
IEEE 802.11s parameters |
net.wlan.mesh.backofftimeout |
Backoff timeout (msec). This prevents endless peering attempts when a neighbor does not answer or rejects us |
net.wlan.mesh.confirmtimeout |
Confirm state timeout (msec) |
net.wlan.mesh.gateint |
mesh gate interval (ms) |
net.wlan.mesh.holdingtimeout |
Holding state timeout (msec) |
net.wlan.mesh.maxholding |
Maximum number of times we are allowed to transition to HOLDING state before backing off during peer link establishment |
net.wlan.mesh.maxretries |
Maximum retries during peer link establishment |
net.wlan.mesh.retrytimeout |
Retry timeout (msec) |
net.wlan.nol_timeout |
NOL timeout (secs) |
net.wlan.recv_bar |
BAR frame processing (ena/dis) |
p1003_1b |
p1003_1b, (see p1003_1b.h) |
p1003_1b.aio_listio_max |
Maximum aio requests for a single lio_listio call |
p1003_1b.aio_max |
|
p1003_1b.aio_prio_delta_max |
|
p1003_1b.asynchronous_io |
|
p1003_1b.delaytimer_max |
|
p1003_1b.fsync |
|
p1003_1b.mapped_files |
|
p1003_1b.memlock |
|
p1003_1b.memlock_range |
|
p1003_1b.memory_protection |
|
p1003_1b.message_passing |
|
p1003_1b.mq_open_max |
|
p1003_1b.pagesize |
|
p1003_1b.prioritized_io |
|
p1003_1b.priority_scheduling |
|
p1003_1b.realtime_signals |
|
p1003_1b.rtsig_max |
|
p1003_1b.sem_nsems_max |
|
p1003_1b.sem_value_max |
|
p1003_1b.semaphores |
|
p1003_1b.shared_memory_objects |
|
p1003_1b.sigqueue_max |
|
p1003_1b.synchronized_io |
|
p1003_1b.timer_max |
|
p1003_1b.timers |
|
security |
Security |
security.audit |
TrustedBSD audit controls |
security.bsd |
BSD security policy |
security.bsd.allow_ptrace |
Deny ptrace(2) use by returning ENOSYS |
security.bsd.allow_read_dir |
Enable read(2) of directory by root for filesystems that support it |
security.bsd.conservative_signals |
Unprivileged processes prevented from sending certain signals to processes whose credentials have changed |
security.bsd.hardlink_check_gid |
Unprivileged processes cannot create hard links to files owned by other groups |
security.bsd.hardlink_check_uid |
Unprivileged processes cannot create hard links to files owned by other users |
security.bsd.map_at_zero |
Permit processes to map an object at virtual address 0. |
security.bsd.see_jail_proc |
Unprivileged processes may see subjects/objects with different jail ids |
security.bsd.see_other_gids |
Unprivileged processes may see subjects/objects with different real gid |
security.bsd.see_other_uids |
Unprivileged processes may see subjects/objects with different real uid |
security.bsd.stack_guard_page |
Specifies the number of guard pages for a stack that grows |
security.bsd.suser_enabled |
Processes with uid 0 have privilege |
security.bsd.unprivileged_chroot |
Unprivileged processes can use chroot(2) |
security.bsd.unprivileged_get_quota |
Unprivileged processes may retrieve quotas for other uids and gids |
security.bsd.unprivileged_idprio |
Allow non-root users to set an idle priority (deprecated) |
security.bsd.unprivileged_mlock |
Allow non-root users to call mlock(2) |
security.bsd.unprivileged_proc_debug |
Unprivileged processes may use process debugging facilities |
security.bsd.unprivileged_read_msgbuf |
Unprivileged processes may read the kernel message buffer |
security.jail |
Jails |
security.jail.allow_raw_sockets |
Prison root can create raw sockets (deprecated) |
security.jail.chflags_allowed |
Processes in jail can alter system file flags (deprecated) |
security.jail.children |
Limits and stats of child jails |
security.jail.children.cur |
Current number of child jails |
security.jail.children.max |
Maximum number of child jails |
security.jail.devfs_ruleset |
Ruleset for the devfs filesystem in jail (deprecated) |
security.jail.enforce_statfs |
Processes in jail cannot see all mounted file systems (deprecated) |
security.jail.env |
Meta information provided by parent jail |
security.jail.jail_max_af_ips |
Number of IP addresses a jail may have at most per address family (deprecated) |
security.jail.jailed |
Process in jail? |
security.jail.list |
List of active jails |
security.jail.meta_maxbufsize |
Maximum buffer size of each meta and env |
security.jail.mlock_allowed |
Processes in jail can lock/unlock physical pages in memory |
security.jail.mount_allowed |
Processes in jail can mount/unmount jail-friendly file systems (deprecated) |
security.jail.mount_devfs_allowed |
Jail may mount the devfs file system (deprecated) |
security.jail.mount_fdescfs_allowed |
Jail may mount the fdescfs file system (deprecated) |
security.jail.mount_fusefs_allowed |
Jail may mount the fusefs file system (deprecated) |
security.jail.mount_lindebugfs_allowed |
Jail may mount the lindebugfs file system (deprecated) |
security.jail.mount_linprocfs_allowed |
Jail may mount the linprocfs file system (deprecated) |
security.jail.mount_nullfs_allowed |
Jail may mount the nullfs file system (deprecated) |
security.jail.mount_procfs_allowed |
Jail may mount the procfs file system (deprecated) |
security.jail.mount_tmpfs_allowed |
Jail may mount the tmpfs file system (deprecated) |
security.jail.mount_zfs_allowed |
Jail may mount the zfs file system (deprecated) |
security.jail.param |
Jail parameters |
security.jail.param.allow |
Jail permission flags |
security.jail.param.allow.adjtime |
Jail may adjust system time |
security.jail.param.allow.chflags |
Jail may alter system file flags |
security.jail.param.allow.extattr |
Jail may set system-level filesystem extended attributes |
security.jail.param.allow.mlock |
Jail may lock (unlock) physical pages in memory |
security.jail.param.allow.mount |
Jail mount/unmount permission flags |
security.jail.param.allow.mount.[noname] |
Jail may mount/unmount jail-friendly file systems in general |
security.jail.param.allow.mount.devfs |
Jail may mount the devfs file system |
security.jail.param.allow.mount.fdescfs |
Jail may mount the fdescfs file system |
security.jail.param.allow.mount.fusefs |
Jail may mount the fusefs file system |
security.jail.param.allow.mount.lindebugfs |
Jail may mount the lindebugfs file system |
security.jail.param.allow.mount.linprocfs |
Jail may mount the linprocfs file system |
security.jail.param.allow.mount.nullfs |
Jail may mount the nullfs file system |
security.jail.param.allow.mount.procfs |
Jail may mount the procfs file system |
security.jail.param.allow.mount.tmpfs |
Jail may mount the tmpfs file system |
security.jail.param.allow.mount.zfs |
Jail may mount the zfs file system |
security.jail.param.allow.nfsd |
Mountd/nfsd may run in the jail |
security.jail.param.allow.quotas |
Jail may set file quotas |
security.jail.param.allow.raw_sockets |
Jail may create raw sockets |
security.jail.param.allow.read_msgbuf |
Jail may read the kernel message buffer |
security.jail.param.allow.reserved_ports |
Jail may bind sockets to reserved ports |
security.jail.param.allow.set_hostname |
Jail may set hostname |
security.jail.param.allow.settime |
Jail may set system time |
security.jail.param.allow.socket_af |
Jail may create sockets other than just UNIX/IPv4/IPv6/route |
security.jail.param.allow.suser |
Processes in jail with uid 0 have privilege |
security.jail.param.allow.sysvipc |
Jail may use SYSV IPC |
security.jail.param.allow.unprivileged_proc_debug |
Unprivileged processes may use process debugging facilities |
security.jail.param.allow.vmm |
Allow use of vmm in a jail. |
security.jail.param.children |
Number of child jails |
security.jail.param.children.cur |
Current number of child jails |
security.jail.param.children.max |
Maximum number of child jails |
security.jail.param.cpuset |
Jail cpuset |
security.jail.param.cpuset.id |
Jail cpuset ID |
security.jail.param.devfs_ruleset |
Ruleset for in-jail devfs mounts |
security.jail.param.dying |
Jail is in the process of shutting down |
security.jail.param.enforce_statfs |
Jail cannot see all mounted file systems |
security.jail.param.env |
Jail meta information readable by the jail |
security.jail.param.host |
Jail host info |
security.jail.param.host.[noname] |
Jail host info |
security.jail.param.host.domainname |
Jail NIS domainname |
security.jail.param.host.hostid |
Jail host ID |
security.jail.param.host.hostname |
Jail hostname |
security.jail.param.host.hostuuid |
Jail host UUID |
security.jail.param.ip4 |
Jail IPv4 address virtualization |
security.jail.param.ip4.[noname] |
Jail IPv4 address virtualization |
security.jail.param.ip4.addr |
Jail IPv4 addresses |
security.jail.param.ip4.saddrsel |
Do (not) use IPv4 source address selection rather than the primary jail IPv4 address. |
security.jail.param.ip6 |
Jail IPv6 address virtualization |
security.jail.param.ip6.[noname] |
Jail IPv6 address virtualization |
security.jail.param.ip6.addr |
Jail IPv6 addresses |
security.jail.param.ip6.saddrsel |
Do (not) use IPv6 source address selection rather than the primary jail IPv6 address. |
security.jail.param.jid |
Jail ID |
security.jail.param.linux |
Jail Linux parameters |
security.jail.param.linux.[noname] |
Jail Linux parameters |
security.jail.param.linux.osname |
Jail Linux kernel OS name |
security.jail.param.linux.osrelease |
Jail Linux kernel OS release |
security.jail.param.linux.oss_version |
Jail Linux OSS version |
security.jail.param.mac |
Jail parameters for MAC policy controls |
security.jail.param.meta |
Jail meta information hidden from the jail |
security.jail.param.name |
Jail name |
security.jail.param.osreldate |
Jail value for kern.osreldate and uname -K |
security.jail.param.osrelease |
Jail value for kern.osrelease and uname -r |
security.jail.param.parent |
Jail parent ID |
security.jail.param.path |
Jail root path |
security.jail.param.persist |
Jail persistence |
security.jail.param.securelevel |
Jail secure level |
security.jail.param.sysvmsg |
SYSV message queues |
security.jail.param.sysvmsg.[noname] |
SYSV message queues |
security.jail.param.sysvsem |
SYSV semaphores |
security.jail.param.sysvsem.[noname] |
SYSV semaphores |
security.jail.param.sysvshm |
SYSV shared memory |
security.jail.param.sysvshm.[noname] |
SYSV shared memory |
security.jail.param.vnet |
Virtual network stack |
security.jail.param.zfs |
Jail ZFS parameters |
security.jail.param.zfs.[noname] |
Jail ZFS parameters |
security.jail.param.zfs.mount_snapshot |
Allow mounting snapshots in the .zfs directory for unjailed datasets |
security.jail.set_hostname_allowed |
Processes in jail can set their hostnames (deprecated) |
security.jail.socket_unixiproute_only |
Processes in jail are limited to creating UNIX/IP/route sockets only (deprecated) |
security.jail.sysvipc_allowed |
Processes in jail can use System V IPC primitives (deprecated) |
security.jail.vmm_allowed |
Allow use of vmm in a jail. (deprecated) |
security.jail.vnet |
Jail owns vnet? |
security.mac |
TrustedBSD MAC policy controls |
security.mac.labeled |
Mask of object types being labeled |
security.mac.max_slots |
|
security.mac.mmap_revocation |
Revoke mmap access to files on subject relabel |
security.mac.mmap_revocation_via_cow |
Revoke mmap access to files via copy-on-write semantics, or by removing all write access |
security.mac.ntpd |
mac_ntpd policy controls |
security.mac.ntpd.enabled |
Enable mac_ntpd policy |
security.mac.ntpd.uid |
User id for ntpd user |
security.mac.portacl |
TrustedBSD mac_portacl policy controls |
security.mac.portacl.autoport_exempt |
Allow automatic allocation through binding port 0 if not IP_PORTRANGELOW |
security.mac.portacl.enabled |
Enforce portacl policy |
security.mac.portacl.port_high |
Highest port to enforce for |
security.mac.portacl.rules |
Rules |
security.mac.portacl.suser_exempt |
Privilege permits binding of any port |
security.mac.version |
|
sys |
sys |
sys.class |
class |
sys.class.drm |
drm |
sys.class.drm.card0 |
card0 |
sys.class.drm.card0-DP-1 |
card0-DP-1 |
sys.class.drm.card0-DP-2 |
card0-DP-2 |
sys.class.drm.card0-HDMI-A-1 |
card0-HDMI-A-1 |
sys.class.drm.card0-eDP-1 |
card0-eDP-1 |
sys.class.drm.renderD128 |
renderD128 |
sys.class.drm.ttm |
ttm |
sys.class.drm.ttm.buffer_objects |
buffer_objects |
sys.class.drm.ttm.buffer_objects.bo_count |
|
sys.class.drm.ttm.memory_accounting |
memory_accounting |
sys.class.drm.ttm.memory_accounting.dma32 |
dma32 |
sys.class.drm.ttm.memory_accounting.dma32.available_memory |
|
sys.class.drm.ttm.memory_accounting.dma32.emergency_memory |
|
sys.class.drm.ttm.memory_accounting.dma32.swap_limit |
|
sys.class.drm.ttm.memory_accounting.dma32.used_memory |
|
sys.class.drm.ttm.memory_accounting.dma32.zone_memory |
|
sys.class.drm.ttm.memory_accounting.kernel |
kernel |
sys.class.drm.ttm.memory_accounting.kernel.available_memory |
|
sys.class.drm.ttm.memory_accounting.kernel.emergency_memory |
|
sys.class.drm.ttm.memory_accounting.kernel.swap_limit |
|
sys.class.drm.ttm.memory_accounting.kernel.used_memory |
|
sys.class.drm.ttm.memory_accounting.kernel.zone_memory |
|
sys.class.drm.ttm.memory_accounting.lower_mem_limit |
|
sys.class.drm.ttm.memory_accounting.pool |
pool |
sys.class.drm.ttm.memory_accounting.pool.pool_allocation_size |
|
sys.class.drm.ttm.memory_accounting.pool.pool_max_size |
|
sys.class.drm.ttm.memory_accounting.pool.pool_small_allocation |
|
sys.class.misc |
misc |
sys.device |
device |
sys.device.drmn0 |
drmn0 |
sys.device.drmn0.fw_version |
fw_version |
sys.device.drmn0.fw_version.asd_fw_version |
|
sys.device.drmn0.fw_version.ce_fw_version |
|
sys.device.drmn0.fw_version.dmcu_fw_version |
|
sys.device.drmn0.fw_version.imu_fw_version |
|
sys.device.drmn0.fw_version.mc_fw_version |
|
sys.device.drmn0.fw_version.me_fw_version |
|
sys.device.drmn0.fw_version.mec2_fw_version |
|
sys.device.drmn0.fw_version.mec_fw_version |
|
sys.device.drmn0.fw_version.pfp_fw_version |
|
sys.device.drmn0.fw_version.rlc_fw_version |
|
sys.device.drmn0.fw_version.rlc_srlc_fw_version |
|
sys.device.drmn0.fw_version.rlc_srlg_fw_version |
|
sys.device.drmn0.fw_version.rlc_srls_fw_version |
|
sys.device.drmn0.fw_version.sdma2_fw_version |
|
sys.device.drmn0.fw_version.sdma_fw_version |
|
sys.device.drmn0.fw_version.smc_fw_version |
|
sys.device.drmn0.fw_version.sos_fw_version |
|
sys.device.drmn0.fw_version.ta_ras_fw_version |
|
sys.device.drmn0.fw_version.ta_xgmi_fw_version |
|
sys.device.drmn0.fw_version.uvd_fw_version |
|
sys.device.drmn0.fw_version.vce_fw_version |
|
sys.device.drmn0.fw_version.vcn_fw_version |
|
sys.device.drmn0.gpu_busy_percent |
|
sys.device.drmn0.gpu_metrics |
|
sys.device.drmn0.mem_info_gtt_total |
|
sys.device.drmn0.mem_info_gtt_used |
|
sys.device.drmn0.mem_info_preempt_used |
|
sys.device.drmn0.mem_info_vis_vram_total |
|
sys.device.drmn0.mem_info_vis_vram_used |
|
sys.device.drmn0.mem_info_vram_total |
|
sys.device.drmn0.mem_info_vram_used |
|
sys.device.drmn0.mem_info_vram_vendor |
|
sys.device.drmn0.pcie_replay_count |
|
sys.device.drmn0.power_dpm_force_performance_level |
|
sys.device.drmn0.power_dpm_state |
|
sys.device.drmn0.pp_cur_state |
|
sys.device.drmn0.pp_dpm_dcefclk |
|
sys.device.drmn0.pp_dpm_fclk |
|
sys.device.drmn0.pp_dpm_mclk |
|
sys.device.drmn0.pp_dpm_pcie |
|
sys.device.drmn0.pp_dpm_sclk |
|
sys.device.drmn0.pp_dpm_socclk |
|
sys.device.drmn0.pp_force_state |
|
sys.device.drmn0.pp_mclk_od |
|
sys.device.drmn0.pp_num_states |
|
sys.device.drmn0.pp_od_clk_voltage |
|
sys.device.drmn0.pp_power_profile_mode |
|
sys.device.drmn0.pp_sclk_od |
|
sys.device.drmn0.pp_table |
|
sys.device.drmn0.product_name |
|
sys.device.drmn0.product_number |
|
sys.device.drmn0.serial_number |
|
sys.device.drmn0.thermal_throttling_logging |
|
sys.device.drmn0.vbios_version |
|
sys.device.iwlwifi0 |
iwlwifi0 |
sys.device.nvidia0 |
nvidia0 |
sys.device.vgapci0 |
vgapci0 |
sysctl |
Sysctl internal magic |
sysctl.fakenextobjleafnoskip |
Next object (only leaf) avoiding SKIP flag |
sysctl.name |
|
sysctl.name2oid |
|
sysctl.next |
|
sysctl.nextnoskip |
|
sysctl.nextobjleaf |
Next object (only leaf) |
sysctl.nextobjleaf_byname |
Next object name (only leaf) |
sysctl.nextobjnode |
Next object (internal node or leaf) |
sysctl.nextobjnode_byname |
Next object name (internal node or leaf) |
sysctl.objdesc |
Object description string |
sysctl.objdesc_byname |
Object description by its name |
sysctl.objfakeid_byname |
Object (possibly fake) ID by its name |
sysctl.objfakename |
Object (possibly fake) name |
sysctl.objfmt |
Object format string |
sysctl.objfmt_byname |
object format string by its name |
sysctl.objhashandler |
Object has a handler |
sysctl.objhashandler_byname |
Object has a handler by its name |
sysctl.objid_byname |
Object ID by its name |
sysctl.objidextended_byname |
Object ID by its name, possibly extended |
sysctl.objkind |
Object kind (flags and type) |
sysctl.objkind_byname |
Object kind by its name |
sysctl.objlabel |
Object label |
sysctl.objlabel_byname |
Object label by its name |
sysctl.objname |
Object name by its OID |
sysctl.oiddescr |
|
sysctl.oidfmt |
|
sysctl.oidlabel |
|
sysctl.serobj |
Serialize object properties |
sysctl.serobj_byname |
Serialize object by its name |
sysctl.serobjnextleaf |
Serialize object with next-leaf-OID |
sysctl.serobjnextleaf_byname |
Serialize object by its name with next-leaf-name |
sysctl.serobjnextnode |
Serialize object with next-node-OID |
sysctl.serobjnextnode_byname |
Serialize object by its name with next-node-name |
user |
user-level |
user.bc_base_max |
Max ibase/obase values in bc(1) |
user.bc_dim_max |
Max array size in bc(1) |
user.bc_scale_max |
Max scale value in bc(1) |
user.bc_string_max |
Max string length in bc(1) |
user.coll_weights_max |
Maximum number of weights assigned to an LC_COLLATE locale entry |
user.cs_path |
PATH that finds all the standard utilities |
user.expr_nest_max |
|
user.line_max |
Max length (bytes) of a text-processing utility's input line |
user.localbase |
Prefix used to install and locate add-on packages |
user.posix2_c_bind |
Whether C development supports the C bindings option |
user.posix2_c_dev |
Whether system supports the C development utilities option |
user.posix2_char_term |
|
user.posix2_fort_dev |
Whether system supports FORTRAN development utilities |
user.posix2_fort_run |
Whether system supports FORTRAN runtime utilities |
user.posix2_localedef |
Whether system supports creation of locales |
user.posix2_sw_dev |
Whether system supports software development utilities |
user.posix2_upe |
Whether system supports the user portability utilities |
user.posix2_version |
The version of POSIX 1003.2 with which the system attempts to comply |
user.re_dup_max |
Maximum number of repeats of a regexp permitted |
user.stream_max |
Minimum maximum number of streams a process may have open at one time |
user.tzname_max |
Minimum maximum number of types supported for timezone names |
vfs |
File system |
vfs.acl_nfs4_old_semantics |
Use pre-PSARC/2010/029 NFSv4 ACL semantics |
vfs.aio |
Async IO management |
vfs.aio.aiod_lifetime |
Maximum lifetime for idle aiod |
vfs.aio.enable_unsafe |
Permit asynchronous IO on all file types, not just known-safe types |
vfs.aio.max_aio_per_proc |
Maximum active aio requests per process |
vfs.aio.max_aio_procs |
Maximum number of kernel processes to use for handling async IO |
vfs.aio.max_aio_queue |
Maximum number of aio requests to queue, globally |
vfs.aio.max_aio_queue_per_proc |
Maximum queued aio requests per process |
vfs.aio.max_buf_aio |
Maximum buf aio requests per process |
vfs.aio.num_aio_procs |
Number of presently active kernel processes for async IO |
vfs.aio.num_buf_aio |
Number of aio requests presently handled by the buf subsystem |
vfs.aio.num_queue_count |
Number of queued aio requests |
vfs.aio.num_unmapped_aio |
Number of aio requests presently handled by unmapped I/O buffers |
vfs.aio.target_aio_procs |
Preferred number of ready kernel processes for async IO |
vfs.aio.unsafe_warningcnt |
Warnings that will be triggered upon failed IO requests on unsafe files |
vfs.altbufferflushes |
Number of fsync flushes to limit dirty buffers |
vfs.autofs |
Automounter filesystem |
vfs.autofs.cache |
Number of seconds to wait before reinvoking automountd(8) for any given file or directory |
vfs.autofs.debug |
Enable debug messages |
vfs.autofs.interruptible |
Allow requests to be interrupted by signal |
vfs.autofs.mount_on_stat |
Trigger mount on stat(2) on mountpoint |
vfs.autofs.retry_attempts |
Number of attempts before failing mount |
vfs.autofs.retry_delay |
Number of seconds before retrying |
vfs.autofs.timeout |
Number of seconds to wait for automountd(8) |
vfs.barrierwrites |
Number of barrier writes |
vfs.bdwriteskip |
Number of buffers supplied to bdwrite with snapshot deadlock risk |
vfs.buf_pager_relbuf |
Make buffer pager release buffers after reading |
vfs.bufdefragcnt |
Number of times we have had to repeat buffer allocation to defragment |
vfs.buffreekvacnt |
Number of times we have freed the KVA space from some buffer |
vfs.bufkvaspace |
Kernel virtual memory used for buffers |
vfs.bufmallocspace |
Amount of malloced memory for buffers |
vfs.bufspace |
Physical memory used for buffers |
vfs.bufspacethresh |
Bufspace consumed before waking the daemon to free some |
vfs.cache |
Name cache |
vfs.cache.debug |
Name cache debugging |
vfs.cache.debug.vnodes_cel_3_failures |
Number of times 3-way vnode locking failed |
vfs.cache.debug.zap_bucket_fail |
|
vfs.cache.debug.zap_bucket_fail2 |
|
vfs.cache.debug.zap_bucket_relock_success |
Number of successful removals after relocking |
vfs.cache.nchstats |
VFS cache effectiveness statistics |
vfs.cache.neg |
Name cache negative entry statistics |
vfs.cache.neg.count |
Number of negative cache entries |
vfs.cache.neg.created |
Number of created negative entries |
vfs.cache.neg.evict_skipped_contended |
Number of times evicting failed due to contention |
vfs.cache.neg.evict_skipped_empty |
Number of times evicting failed due to lack of entries |
vfs.cache.neg.evict_skipped_missed |
Number of times evicting failed due to target entry disappearing |
vfs.cache.neg.evicted |
Number of evicted negative entries |
vfs.cache.neg.hits |
Number of cache hits (negative) |
vfs.cache.neg.hot |
Number of hot negative entries |
vfs.cache.param |
Name cache parameters |
vfs.cache.param.fast_lookup |
|
vfs.cache.param.negfactor |
Ratio of negative namecache entries |
vfs.cache.param.negmin |
Negative entry count above which automatic eviction is allowed |
vfs.cache.param.negminpct |
Negative entry % of namecache capacity above which automatic eviction is allowed |
vfs.cache.param.size |
Total namecache capacity |
vfs.cache.param.sizefactor |
Size factor for namecache |
vfs.cache.stats |
Name cache statistics |
vfs.cache.stats.count |
Number of cache entries |
vfs.cache.stats.dotdothis |
Number of '..' hits |
vfs.cache.stats.dothits |
Number of '.' hits |
vfs.cache.stats.drops |
Number of dropped entries due to reaching the limit |
vfs.cache.stats.fullpathcalls |
Number of fullpath search calls |
vfs.cache.stats.fullpathfail1 |
Number of fullpath search errors (ENOTDIR) |
vfs.cache.stats.fullpathfail2 |
Number of fullpath search errors (VOP_VPTOCNP failures) |
vfs.cache.stats.fullpathfail4 |
Number of fullpath search errors (ENOMEM) |
vfs.cache.stats.fullpathfound |
Number of successful fullpath calls |
vfs.cache.stats.heldvnodes |
Number of namecache entries with vnodes held |
vfs.cache.stats.hitpct |
Percentage of hits |
vfs.cache.stats.miss |
Number of cache misses |
vfs.cache.stats.misszap |
Number of cache misses we do not want to cache |
vfs.cache.stats.neg |
Number of negative cache entries |
vfs.cache.stats.neghits |
Number of cache hits (negative) |
vfs.cache.stats.negzaps |
Number of cache hits (negative) we do not want to cache |
vfs.cache.stats.poshits |
Number of cache hits (positive) |
vfs.cache.stats.posszaps |
Number of cache hits (positive) we do not want to cache |
vfs.cache.stats.poszaps |
Number of cache hits (positive) we do not want to cache |
vfs.cache.stats.symlinktoobig |
Number of times symlink did not fit the cache |
vfs.cache_fast_lookup |
|
vfs.cd9660 |
cd9660 filesystem |
vfs.cd9660.use_buf_pager |
Use buffer pager instead of bmap |
vfs.conflist |
List of all configured filesystems |
vfs.ctl |
Sysctl by fsid |
vfs.default_autoro |
Retry failed r/w mount as r/o if no explicit ro/rw option is specified |
vfs.deferred_inact |
Number of times inactive processing was deferred |
vfs.deferred_unmount |
deferred unmount controls |
vfs.deferred_unmount.retry_delay_hz |
Delay in units of [1/kern.hz]s when retrying a failed deferred unmount |
vfs.deferred_unmount.retry_limit |
Maximum number of retries for deferred unmount failure |
vfs.deferred_unmount.total_retries |
Total number of retried deferred unmounts |
vfs.devfs |
DEVFS filesystem |
vfs.devfs.dotimes |
Update timestamps on DEVFS with default precision |
vfs.devfs.generation |
DEVFS generation number |
vfs.devfs.rule_depth |
Max depth of ruleset include |
vfs.dirtybufferflushes |
Number of bdwrite to bawrite conversions to limit dirty buffers |
vfs.dirtybufthresh |
Number of bdwrite to bawrite conversions to clear dirty buffers |
vfs.ffs |
FFS filesystem |
vfs.ffs.adjblkcnt |
Adjust Inode Used Blocks Count |
vfs.ffs.adjdepth |
Adjust Directory Inode Depth |
vfs.ffs.adjnbfree |
Adjust number of free blocks |
vfs.ffs.adjndir |
Adjust number of directories |
vfs.ffs.adjnffree |
Adjust number of free frags |
vfs.ffs.adjnifree |
Adjust number of free inodes |
vfs.ffs.adjnumclusters |
Adjust number of free clusters |
vfs.ffs.adjrefcnt |
Adjust Inode Reference Count |
vfs.ffs.compute_summary_at_mount |
Recompute summary at mount |
vfs.ffs.doasyncfree |
do not force synchronous writes when blocks are reallocated |
vfs.ffs.doasyncinodeinit |
Perform inode block initialization using asynchronous writes |
vfs.ffs.doreallocblks |
enable block reallocation |
vfs.ffs.dotrimcons |
enable BIO_DELETE / TRIM consolidation |
vfs.ffs.enxio_enable |
enable mapping of other disk I/O errors to ENXIO |
vfs.ffs.freeblks |
Free Range of Blocks |
vfs.ffs.freedirs |
Free Range of Directory Inodes |
vfs.ffs.freefiles |
Free Range of File Inodes |
vfs.ffs.maxclustersearch |
max number of cylinder groups to search for contiguous blocks |
vfs.ffs.prttimechgs |
print UFS1 time changes made to inodes |
vfs.ffs.setcwd |
Set Current Working Directory |
vfs.ffs.setdotdot |
Change Value of .. Entry |
vfs.ffs.setflags |
Change Filesystem Flags |
vfs.ffs.setsize |
Set the inode size |
vfs.ffs.unlink |
Unlink a Duplicate Name |
vfs.ffs.use_buf_pager |
Always use buffer pager instead of bmap |
vfs.flushbufqtarget |
Amount of work to do in flushbufqueues when helping bufdaemon |
vfs.flushwithdeps |
Number of buffers flushed with dependencies that require rollbacks |
vfs.freevnodes |
Number of "free" vnodes (legacy) |
vfs.fusefs |
FUSE tunables |
vfs.fusefs.data_cache_mode |
Zero: disable caching of FUSE file data; One: write-through caching (default); Two: write-back caching (generally unsafe) |
vfs.fusefs.enforce_dev_perms |
enforce fuse device permissions for secondary mounts |
vfs.fusefs.iov_credit |
how many times is an oversized fuse_iov tolerated |
vfs.fusefs.iov_permanent_bufsize |
limit for permanently stored buffer size for fuse_iovs |
vfs.fusefs.kernelabi_major |
FUSE kernel abi major version |
vfs.fusefs.kernelabi_minor |
FUSE kernel abi minor version |
vfs.fusefs.stats |
FUSE statistics |
vfs.fusefs.stats.filehandle_count |
number of open FUSE filehandles |
vfs.fusefs.stats.lookup_cache_hits |
number of positive cache hits in lookup |
vfs.fusefs.stats.lookup_cache_misses |
number of cache misses in lookup |
vfs.fusefs.stats.node_count |
Count of FUSE vnodes |
vfs.fusefs.stats.ticket_count |
Number of allocated tickets |
vfs.generic |
Generic filesystem |
vfs.getnewbufcalls |
Number of calls to getnewbuf |
vfs.getnewbufrestarts |
Number of times getnewbuf has had to restart a buffer acquisition |
vfs.hibufspace |
Maximum allowed value of bufspace (excluding metadata) |
vfs.hidirtybuffers |
When the number of dirty buffers is considered severe |
vfs.hifreebuffers |
Threshold for clean buffer recycling |
vfs.hirunningspace |
Maximum amount of space to use for in-progress I/O |
vfs.ino64_trunc_error |
Error on truncation of device, file or inode number, or link count |
vfs.lobufspace |
Minimum amount of buffers we want to have |
vfs.lodirtybuffers |
How many buffers we want to have free before bufdaemon can sleep |
vfs.lofreebuffers |
Target number of free buffers |
vfs.lookup_cap_dotdot |
enables ".." components in path lookup in capability mode |
vfs.lookup_cap_dotdot_nonlocal |
enables ".." components in path lookup in capability mode on non-local mount |
vfs.lorunningspace |
Minimum preferred space used for in-progress I/O |
vfs.mappingrestarts |
Number of times getblk has had to restart a buffer mapping for unmapped buffer |
vfs.maxbcachebuf |
Maximum size of a buffer cache block |
vfs.maxbufspace |
Maximum allowed value of bufspace (including metadata) |
vfs.maxmallocbufspace |
Maximum amount of malloced memory for buffers |
vfs.msdosfs |
msdos filesystem |
vfs.msdosfs.use_buf_pager |
Use buffer pager instead of bmap |
vfs.nfs |
NFS filesystem |
vfs.nfs.access_cache_timeout |
NFS ACCESS cache timeout |
vfs.nfs.bufpackets |
Buffer reservation size 2 < x < 64 |
vfs.nfs.callback_addr |
NFSv4 callback addr for server to use |
vfs.nfs.clean_pages_on_close |
NFS clean dirty pages on close |
vfs.nfs.commit_on_close |
write+commit on close, else only write |
vfs.nfs.debuglevel |
Debug level for NFS client |
vfs.nfs.defect |
Allow nfsiods to migrate serving different mounts |
vfs.nfs.diskless_rootaddr |
Diskless root nfs address |
vfs.nfs.diskless_rootpath |
Path to nfs root |
vfs.nfs.diskless_valid |
Has the diskless struct been filled correctly |
vfs.nfs.downdelayinitial |
|
vfs.nfs.downdelayinterval |
|
vfs.nfs.dsretries |
Number of retries for a DS RPC before failure |
vfs.nfs.dssameconn |
Use same TCP connection to multiple DSs |
vfs.nfs.enable_uidtostring |
Make nfs always send numeric owner_names |
vfs.nfs.fileid_maxwarnings |
Limit fileid corruption warnings; 0 is off; -1 is unlimited |
vfs.nfs.ignore_eexist |
NFS ignore EEXIST replies for mkdir/symlink |
vfs.nfs.iodmax |
Max number of nfsiod kthreads |
vfs.nfs.iodmaxidle |
Max number of seconds an nfsiod kthread will sleep before exiting |
vfs.nfs.iodmin |
Min number of nfsiod kthreads to keep as spares |
vfs.nfs.maxalloclen |
NFS max allocate/deallocate length |
vfs.nfs.maxcopyrange |
Max size of a Copy so RPC times are reasonable |
vfs.nfs.nfs3_jukebox_delay |
Number of seconds to delay a retry after receiving EJUKEBOX |
vfs.nfs.nfs_directio_allow_mmap |
Enable mmapped I/O on files with O_DIRECT opens |
vfs.nfs.nfs_directio_enable |
Enable NFS directio |
vfs.nfs.nfs_ip_paranoia |
|
vfs.nfs.nfs_keep_dirty_on_error |
Retry pageout if error returned |
vfs.nfs.pnfsiothreads |
Number of pNFS mirror I/O threads |
vfs.nfs.pnfsmirror |
Mirror level for pNFS service |
vfs.nfs.prime_access_cache |
Prime NFS ACCESS cache when fetching attributes |
vfs.nfs.realign_count |
Number of mbuf realignments done |
vfs.nfs.realign_test |
Number of realign tests done |
vfs.nfs.reconnects |
Number of times the nfs client has had to reconnect |
vfs.nfs.skip_wcc_data_onerr |
Disable weak cache consistency checking when server returns an error |
vfs.nfs.use_buf_pager |
Use buffer pager instead of direct readrpc call |
vfs.nfs.userhashsize |
Size of hash tables for uid/name mapping |
vfs.nfsd |
NFS server |
vfs.nfsd.allowreadforwriteopen |
Allow Reads to be done with Write Access StateIDs |
vfs.nfsd.async |
Tell client that writes were synced even though they were not |
vfs.nfsd.cachetcp |
Enable the DRC for NFS over TCP |
vfs.nfsd.clienthashsize |
Size of client hash table set via loader.conf |
vfs.nfsd.commit_blks |
|
vfs.nfsd.commit_miss |
|
vfs.nfsd.debuglevel |
Debug level for NFS server |
vfs.nfsd.default_flexfile |
Make Flex File Layout the default for pNFS |
vfs.nfsd.dsdirsize |
Number of dsN subdirs on the DS servers |
vfs.nfsd.enable_checkutf8 |
Enable the NFSv4 check for the UTF-8 compliant name required by RFC 3530 |
vfs.nfsd.enable_locallocks |
Enable nfsd to acquire local locks on files |
vfs.nfsd.enable_nobodycheck |
Enable the NFSv4 check when setting user nobody as owner |
vfs.nfsd.enable_nogroupcheck |
Enable the NFSv4 check when setting group nogroup as owner |
vfs.nfsd.enable_stringtouid |
Enable nfsd to accept numeric owner_names |
vfs.nfsd.enable_v42allocate |
Enable NFSv4.2 Allocate operation |
vfs.nfsd.fha |
NFS File Handle Affinity (FHA) |
vfs.nfsd.fha.bin_shift |
Maximum locality distance 2^(bin_shift) bytes |
vfs.nfsd.fha.enable |
Enable NFS File Handle Affinity (FHA) |
vfs.nfsd.fha.fhe_stats |
|
vfs.nfsd.fha.max_nfsds_per_fh |
Maximum nfsd threads that should be working on requests for the same file handle |
vfs.nfsd.fha.max_reqs_per_nfsd |
Maximum requests that single nfsd thread should be working on at any time |
vfs.nfsd.fha.read |
Enable NFS FHA read locality |
vfs.nfsd.fha.write |
Enable NFS FHA write locality |
vfs.nfsd.fhhashsize |
Size of file handle hash table set via loader.conf |
vfs.nfsd.flexlinuxhack |
For Linux clients, hack around Flex File Layout bug |
vfs.nfsd.groups |
Number of thread groups |
vfs.nfsd.issue_delegations |
Enable nfsd to issue delegations |
vfs.nfsd.layouthighwater |
High water mark for number of layouts set via loader.conf |
vfs.nfsd.linux42server |
Enable Linux style NFSv4.2 server (non-RFC compliant) |
vfs.nfsd.maxcopyrange |
Max size of a Copy so RPC times are reasonable |
vfs.nfsd.maxthreads |
Maximal number of threads |
vfs.nfsd.minthreads |
Minimal number of threads |
vfs.nfsd.mirrormnt |
Enable nfsd to cross mount points |
vfs.nfsd.nfs_privport |
Only allow clients using a privileged port for NFSv2, 3 and 4 |
vfs.nfsd.owner_major |
Server owner major |
vfs.nfsd.owner_minor |
Server owner minor |
vfs.nfsd.pnfsgetdsattr |
When set getattr gets DS attributes via RPC |
vfs.nfsd.pnfsstrictatime |
For pNFS service, do Getattr ops to keep atime up-to-date |
vfs.nfsd.request_space_high |
Maximum space in parsed but not handled requests. |
vfs.nfsd.request_space_low |
Low water mark for request space. |
vfs.nfsd.request_space_throttle_count |
Count of times throttling based on request space has occurred |
vfs.nfsd.request_space_throttled |
Whether nfs requests are currently throttled |
vfs.nfsd.request_space_used |
Space in parsed but not handled requests. |
vfs.nfsd.request_space_used_highest |
Highest space used since reboot. |
vfs.nfsd.scope |
Server scope |
vfs.nfsd.server_max_minorversion4 |
The highest minor version of NFSv4 handled by the server |
vfs.nfsd.server_max_nfsvers |
The highest version of NFS handled by the server |
vfs.nfsd.server_min_minorversion4 |
The lowest minor version of NFSv4 handled by the server |
vfs.nfsd.server_min_nfsvers |
The lowest version of NFS handled by the server |
vfs.nfsd.sessionhashsize |
Size of session hash table set via loader.conf |
vfs.nfsd.srvmaxio |
Maximum I/O size in bytes |
vfs.nfsd.statehashsize |
Size of state hash table set via loader.conf |
vfs.nfsd.tcpcachetimeo |
Timeout for TCP entries in the DRC |
vfs.nfsd.tcphighwater |
High water mark for TCP cache entries |
vfs.nfsd.testing_disable_grace |
Disable grace for testing |
vfs.nfsd.threads |
Current number of threads |
vfs.nfsd.udphighwater |
High water mark for UDP cache entries |
vfs.nfsd.v4openaccess |
Enable Linux style NFSv4 Open access check |
vfs.nfsd.v4statelimit |
High water limit for NFSv4 opens+locks+delegations |
vfs.nfsd.writedelegifpos |
Issue a write delegation for read opens if possible |
vfs.nlm |
Network Lock Manager |
vfs.nlm.sysid |
|
vfs.notbufdflushes |
Number of dirty buffer flushes done by the bufdaemon helpers |
vfs.nullfs |
nullfs |
vfs.nullfs.cache_vnodes |
cache free nullfs vnodes |
vfs.numbufallocfails |
Number of times buffer allocations failed |
vfs.numdirtybuffers |
Number of buffers that are dirty (have unwritten changes) at the moment |
vfs.numfreebuffers |
Number of free buffers |
vfs.numvnodes |
Number of vnodes in existence (legacy) |
vfs.pfs |
pseudofs |
vfs.pfs.vncache |
pseudofs vnode cache |
vfs.pfs.vncache.entries |
number of entries in the vnode cache |
vfs.pfs.vncache.hits |
number of cache hits since initialization |
vfs.pfs.vncache.maxentries |
highest number of entries in the vnode cache |
vfs.pfs.vncache.misses |
number of cache misses since initialization |
vfs.read_max |
Cluster read-ahead max block count |
vfs.read_min |
Cluster read min block count |
vfs.recursive_forced_unmount |
Recursively unmount stacked upper mounts when a file system is forcibly unmounted |
vfs.recursiveflushes |
Number of flushes skipped due to being recursive |
vfs.recycles |
Number of vnodes recycled to meet vnode cache targets (legacy) |
vfs.recycles_free |
Number of free vnodes recycled to meet vnode cache targets (legacy) |
vfs.root_mount_always_wait |
Wait for root mount holds even if the root device already exists |
vfs.root_mount_hold |
List of root mount hold tokens |
vfs.runningbufspace |
Amount of presently outstanding async buffer I/O |
vfs.timestamp_precision |
File timestamp precision (0: seconds, 1: sec + ns accurate to 1/HZ, 2: sec + ns truncated to us, 3+: sec + ns (max. precision)) |
vfs.tmpfs |
tmpfs file system |
vfs.tmpfs.memory_percent |
Percent of available memory that can be used if no size limit |
vfs.tmpfs.memory_reserved |
Amount of available memory and swap below which tmpfs growth stops |
vfs.tmpfs.rename_restarts |
Times rename had to restart due to lock contention |
vfs.typenumhash |
Set vfc_typenum using a hash calculation on vfc_name, so that it does not change when file systems are loaded in a different order. |
vfs.ufs |
UFS filesystem |
vfs.ufs.dirhash_docheck |
enable extra sanity tests |
vfs.ufs.dirhash_lowmemcount |
number of times low memory hook called |
vfs.ufs.dirhash_maxmem |
maximum allowed dirhash memory usage |
vfs.ufs.dirhash_mem |
current dirhash memory usage |
vfs.ufs.dirhash_minsize |
minimum directory size in bytes for which to use hashed lookup |
vfs.ufs.dirhash_reclaimpercent |
set percentage of dirhash cache to be removed in low VM events |
vfs.ufs.rename_restarts |
Times rename had to restart due to lock contention |
vfs.unmapped_buf_allowed |
Permit the use of unmapped I/O |
vfs.usermount |
Unprivileged users may mount and unmount file systems |
vfs.vmiodirenable |
Use the VM system for directory writes |
vfs.vnode |
vnode configuration and statistics |
vfs.vnode.param |
vnode configuration |
vfs.vnode.param.can_skip_requeue |
Is LRU requeue skippable |
vfs.vnode.param.limit |
Target for maximum number of vnodes |
vfs.vnode.param.wantfree |
Target for minimum number of "free" vnodes |
vfs.vnode.stats |
vnode statistics |
vfs.vnode.stats.alloc_sleeps |
Number of times vnode allocation blocked waiting on vnlru |
vfs.vnode.stats.count |
Number of vnodes in existence |
vfs.vnode.stats.created |
Number of vnodes created by getnewvnode |
vfs.vnode.stats.free |
Number of "free" vnodes |
vfs.vnode.stats.skipped_requeues |
Number of times LRU requeue was skipped due to lock contention |
vfs.vnode.vnlru |
vnode recycling |
vfs.vnode.vnlru.direct_recycles_free |
Number of free vnodes recycled by vn_alloc callers to meet vnode cache targets |
vfs.vnode.vnlru.failed_runs |
Number of times the vnlru process ran without success |
vfs.vnode.vnlru.kicks |
Number of times vnlru awakened due to vnode shortage |
vfs.vnode.vnlru.max_free_per_call |
limit on vnode free requests per call to the vnlru_free routine |
vfs.vnode.vnlru.recycles |
Number of vnodes recycled to meet vnode cache targets |
vfs.vnode.vnlru.recycles_free |
Number of free vnodes recycled to meet vnode cache targets |
vfs.vnode.vnlru.uma_reclaim_calls |
Number of calls to uma_reclaim |
vfs.vnodes_created |
Number of vnodes created by getnewvnode (legacy) |
vfs.wantfreevnodes |
Target for minimum number of "free" vnodes (legacy) |
vfs.worklist_len |
Syncer thread worklist length |
vfs.write_behind |
Cluster write-behind; 0: disable, 1: enable, 2: backed off |
vfs.zfs |
ZFS file system |
vfs.zfs.abd_scatter_enabled |
Enable scattered ARC data buffers |
vfs.zfs.abd_scatter_min_size |
Minimum size of scatter allocations. |
vfs.zfs.abort_size |
Minimal size of block to attempt early abort |
vfs.zfs.active_allocator |
SPA active allocator |
vfs.zfs.allow_redacted_dataset_mount |
Allow mounting of redacted datasets |
vfs.zfs.anon_data_esize |
size of evictable data in anonymous state |
vfs.zfs.anon_metadata_esize |
size of evictable metadata in anonymous state |
vfs.zfs.anon_size |
size of anonymous state |
vfs.zfs.arc |
ZFS adaptive replacement cache |
vfs.zfs.arc.average_blocksize |
Target average block size |
vfs.zfs.arc.dnode_limit |
Minimum bytes of dnodes in ARC |
vfs.zfs.arc.dnode_limit_percent |
Percent of ARC meta buffers for dnodes |
vfs.zfs.arc.dnode_reduce_percent |
Percentage of excess dnodes to try to unpin |
vfs.zfs.arc.evict_batch_limit |
The number of headers to evict per sublist before moving to the next |
vfs.zfs.arc.eviction_pct |
When full, ARC allocation waits for eviction of this % of alloc size |
vfs.zfs.arc.free_target |
Desired number of free pages below which ARC triggers reclaim |
vfs.zfs.arc.grow_retry |
Seconds before growing ARC size |
vfs.zfs.arc.lotsfree_percent |
System free memory I/O throttle in bytes |
vfs.zfs.arc.max |
Maximum ARC size in bytes |
vfs.zfs.arc.meta_balance |
Balance between metadata and data on ghost hits. |
vfs.zfs.arc.min |
Minimum ARC size in bytes |
vfs.zfs.arc.min_prefetch_ms |
Min life of prefetch block in ms |
vfs.zfs.arc.min_prescient_prefetch_ms |
Min life of prescient prefetched block in ms |
vfs.zfs.arc.no_grow_shift |
log2(fraction of ARC which must be free to allow growing) |
vfs.zfs.arc.pc_percent |
Percent of pagecache to reclaim ARC to |
vfs.zfs.arc.prune_task_threads |
Number of arc_prune threads |
vfs.zfs.arc.shrink_shift |
log2(fraction of ARC to reclaim) |
vfs.zfs.arc.sys_free |
System free memory target size in bytes |
vfs.zfs.arc_free_target |
Desired number of free pages below which ARC triggers reclaim (LEGACY) |
vfs.zfs.arc_max |
Maximum ARC size in bytes (LEGACY) |
vfs.zfs.arc_min |
Minimum ARC size in bytes (LEGACY) |
vfs.zfs.arc_no_grow_shift |
log2(fraction of ARC which must be free to allow growing) (LEGACY) |
vfs.zfs.async_block_max_blocks |
Max number of blocks freed in one txg |
vfs.zfs.autoimport_disable |
Disable pool import at module load |
vfs.zfs.bclone_enabled |
Enable block cloning |
vfs.zfs.bclone_wait_dirty |
Wait for dirty blocks when cloning |
vfs.zfs.blake3_impl |
Select BLAKE3 implementation. |
vfs.zfs.brt |
ZFS Block Reference Table |
vfs.zfs.brt.brt_zap_default_bs |
BRT ZAP leaf blockshift |
vfs.zfs.brt.brt_zap_default_ibs |
BRT ZAP indirect blockshift |
vfs.zfs.brt.brt_zap_prefetch |
Enable prefetching of BRT ZAP entries |
vfs.zfs.btree_verify_intensity |
Enable btree verification. Levels above 4 require ZFS be built with debugging |
vfs.zfs.ccw_retry_interval |
Configuration cache file write, retry after failure, interval (seconds) |
vfs.zfs.checksum_events_per_second |
Rate limit checksum events to this many checksum errors per second (do not set below ZED threshold). |
vfs.zfs.commit_timeout_pct |
ZIL block open timeout percentage |
vfs.zfs.compressed_arc_enabled |
Disable compressed ARC buffers |
vfs.zfs.condense |
ZFS condense |
vfs.zfs.condense.indirect_commit_entry_delay_ms |
Used by tests to ensure certain actions happen in the middle of a condense. A maximum value of 1 should be sufficient. |
vfs.zfs.condense.indirect_obsolete_pct |
Minimum obsolete percent of bytes in the mapping to attempt condensing |
vfs.zfs.condense.indirect_vdevs_enable |
Whether to attempt condensing indirect vdev mappings |
vfs.zfs.condense.max_obsolete_bytes |
Minimum size obsolete spacemap to attempt condensing |
vfs.zfs.condense.min_mapping_bytes |
Don't bother condensing if the mapping uses less than this amount of memory |
vfs.zfs.condense_pct |
Condense on-disk spacemap when it is more than this many percent of the in-memory counterpart |
vfs.zfs.cpus_per_allocator |
Minimum number of CPUs per allocator |
vfs.zfs.crypt_sessions |
Number of cryptographic sessions created |
vfs.zfs.dbgmsg_enable |
Enable ZFS debug message log |
vfs.zfs.dbgmsg_maxsize |
Maximum ZFS debug log size |
vfs.zfs.dbuf |
ZFS disk buf cache |
vfs.zfs.dbuf.cache_shift |
Set size of dbuf cache to log2 fraction of arc size. |
vfs.zfs.dbuf.metadata_cache_max_bytes |
Maximum size in bytes of dbuf metadata cache. |
vfs.zfs.dbuf.metadata_cache_shift |
Set size of dbuf metadata cache to log2 fraction of arc size. |
vfs.zfs.dbuf.mutex_cache_shift |
Set size of dbuf cache mutex array as log2 shift. |
vfs.zfs.dbuf_cache |
ZFS disk buf cache |
vfs.zfs.dbuf_cache.hiwater_pct |
Percentage over dbuf_cache_max_bytes for direct dbuf eviction. |
vfs.zfs.dbuf_cache.lowater_pct |
Percentage below dbuf_cache_max_bytes when dbuf eviction stops. |
vfs.zfs.dbuf_cache.max_bytes |
Maximum size in bytes of the dbuf cache. |
vfs.zfs.dbuf_state_index |
Calculate arc header index |
vfs.zfs.ddt_data_is_special |
Place DDT data into the special class |
vfs.zfs.deadman |
ZFS deadman |
vfs.zfs.deadman.checktime_ms |
Dead I/O check interval in milliseconds |
vfs.zfs.deadman.enabled |
Enable deadman timer |
vfs.zfs.deadman.failmode |
Failmode for deadman timer |
vfs.zfs.deadman.synctime_ms |
Pool sync expiration time in milliseconds |
vfs.zfs.deadman.ziotime_ms |
IO expiration time in milliseconds |
vfs.zfs.deadman_events_per_second |
Rate limit hung IO (deadman) events to this many per second |
vfs.zfs.debug |
Debug level |
vfs.zfs.debugflags |
Debug flags for ZFS testing. |
vfs.zfs.dedup |
ZFS dedup |
vfs.zfs.dedup.ddt_zap_default_bs |
DDT ZAP leaf blockshift |
vfs.zfs.dedup.ddt_zap_default_ibs |
DDT ZAP indirect blockshift |
vfs.zfs.dedup.log_cap |
Soft cap for the size of the current dedup log |
vfs.zfs.dedup.log_flush_entries_max |
Max number of log entries to flush each transaction |
vfs.zfs.dedup.log_flush_entries_min |
Min number of log entries to flush each transaction |
vfs.zfs.dedup.log_flush_flow_rate_txgs |
Number of txgs to average flow rates across |
vfs.zfs.dedup.log_flush_min_time_ms |
Min time to spend on incremental dedup log flush each transaction |
vfs.zfs.dedup.log_flush_txgs |
Number of TXGs to try to rotate the log in |
vfs.zfs.dedup.log_hard_cap |
Whether to use the soft cap as a hard cap |
vfs.zfs.dedup.log_mem_max |
Max memory for dedup logs |
vfs.zfs.dedup.log_mem_max_percent |
Max memory for dedup logs, as % of total memory |
vfs.zfs.dedup.log_txg_max |
Max transactions before starting to flush dedup logs |
vfs.zfs.dedup.prefetch |
Enable prefetching of deduplicated blocks |
vfs.zfs.default_bs |
Default dnode block shift |
vfs.zfs.default_ibs |
Default dnode indirect block shift |
vfs.zfs.delay_min_dirty_percent |
Transaction delay threshold |
vfs.zfs.delay_scale |
How quickly delay approaches infinity |
vfs.zfs.dio_enabled |
Enable Direct I/O |
vfs.zfs.dio_write_verify_events_per_second |
Rate limit Direct I/O write verify events to this many per second |
vfs.zfs.dirty_data_max |
Determines the dirty space limit |
vfs.zfs.dirty_data_max_max |
zfs_dirty_data_max upper bound in bytes |
vfs.zfs.dirty_data_max_max_percent |
zfs_dirty_data_max upper bound as % of RAM |
vfs.zfs.dirty_data_max_percent |
Max percent of RAM allowed to be dirty |
vfs.zfs.dirty_data_sync_percent |
Dirty data txg sync threshold as a percentage of zfs_dirty_data_max |
vfs.zfs.disable_ivset_guid_check |
Set to allow raw receives without IVset guids |
vfs.zfs.dmu_ddt_copies |
Override copies= for dedup objects |
vfs.zfs.dmu_object_alloc_chunk_shift |
CPU-specific allocator grabs 2^N objects at once |
vfs.zfs.dmu_offset_next_sync |
Enable forcing txg sync to find holes |
vfs.zfs.dmu_prefetch_max |
Limit one prefetch call to this size |
vfs.zfs.dtl_sm_blksz |
Block size for DTL space map. Power of 2 greater than 4096. |
vfs.zfs.earlyabort_pass |
Enable early abort attempts when using zstd |
vfs.zfs.embedded_slog_min_ms |
Minimum number of metaslabs required to dedicate one for log blocks |
vfs.zfs.flags |
Set additional debugging flags |
vfs.zfs.fletcher_4_impl |
Select fletcher 4 implementation. |
vfs.zfs.free_bpobj_enabled |
Enable processing of the free_bpobj |
vfs.zfs.free_leak_on_eio |
Set to ignore IO errors during free and permanently leak the space |
vfs.zfs.free_min_time_ms |
Min millisecs to free per txg |
vfs.zfs.history_output_max |
Maximum size in bytes of ZFS ioctl output that will be logged |
vfs.zfs.immediate_write_sz |
Largest data block to write to zil |
vfs.zfs.initialize_chunk_size |
Size in bytes of writes by zpool initialize |
vfs.zfs.initialize_value |
Value written during zpool initialize |
vfs.zfs.keep_log_spacemaps_at_export |
Prevent the log spacemaps from being flushed and destroyed during pool export/destroy |
vfs.zfs.l2arc |
ZFS l2arc |
vfs.zfs.l2arc.exclude_special |
Exclude dbufs on special vdevs from being cached to L2ARC if set. |
vfs.zfs.l2arc.feed_again |
Turbo L2ARC warmup |
vfs.zfs.l2arc.feed_min_ms |
Min feed interval in milliseconds |
vfs.zfs.l2arc.feed_secs |
Seconds between L2ARC writing |
vfs.zfs.l2arc.headroom |
Number of max device writes to precache |
vfs.zfs.l2arc.headroom_boost |
Compressed l2arc_headroom multiplier |
vfs.zfs.l2arc.meta_percent |
Percent of ARC size allowed for L2ARC-only headers |
vfs.zfs.l2arc.mfuonly |
Cache only MFU data from ARC into L2ARC |
vfs.zfs.l2arc.noprefetch |
Skip caching prefetched buffers |
vfs.zfs.l2arc.norw |
No reads during writes |
vfs.zfs.l2arc.rebuild_blocks_min_l2size |
Min size in bytes to write rebuild log blocks in L2ARC |
vfs.zfs.l2arc.rebuild_enabled |
Rebuild the L2ARC when importing a pool |
vfs.zfs.l2arc.trim_ahead |
TRIM ahead L2ARC write size multiplier |
vfs.zfs.l2arc.write_boost |
Extra write bytes during device warmup |
vfs.zfs.l2arc.write_max |
Max write bytes per interval |
vfs.zfs.l2arc_feed_again |
Turbo L2ARC warmup (LEGACY) |
vfs.zfs.l2arc_feed_min_ms |
Min feed interval in milliseconds (LEGACY) |
vfs.zfs.l2arc_feed_secs |
Seconds between L2ARC writing (LEGACY) |
vfs.zfs.l2arc_headroom |
Number of max device writes to precache (LEGACY) |
vfs.zfs.l2arc_headroom_boost |
Compressed l2arc_headroom multiplier (LEGACY) |
vfs.zfs.l2arc_noprefetch |
Skip caching prefetched buffers (LEGACY) |
vfs.zfs.l2arc_norw |
No reads during writes (LEGACY) |
vfs.zfs.l2arc_write_boost |
Extra write bytes during device warmup (LEGACY) |
vfs.zfs.l2arc_write_max |
Max write bytes per interval (LEGACY) |
vfs.zfs.l2c_only_size |
size of l2c_only state |
vfs.zfs.livelist |
ZFS livelist |
vfs.zfs.livelist.condense |
ZFS livelist condense |
vfs.zfs.livelist.condense.new_alloc |
Whether extra ALLOC blkptrs were added to a livelist entry while it was being condensed |
vfs.zfs.livelist.condense.sync_cancel |
Whether livelist condensing was canceled in the synctask |
vfs.zfs.livelist.condense.sync_pause |
Set the livelist condense synctask to pause |
vfs.zfs.livelist.condense.zthr_cancel |
Whether livelist condensing was canceled in the zthr function |
vfs.zfs.livelist.condense.zthr_pause |
Set the livelist condense zthr to pause |
vfs.zfs.livelist.max_entries |
Size to start the next sub-livelist in a livelist |
vfs.zfs.livelist.min_percent_shared |
Threshold at which livelist is disabled |
vfs.zfs.lua |
ZFS lua |
vfs.zfs.lua.max_instrlimit |
Max instruction limit that can be specified for a channel program |
vfs.zfs.lua.max_memlimit |
Max memory limit that can be specified for a channel program |
vfs.zfs.max_async_dedup_frees |
Max number of dedup blocks freed in one txg |
vfs.zfs.max_auto_ashift |
Max ashift used when optimizing for logical -> physical sector size on new top-level vdevs. (LEGACY) |
vfs.zfs.max_dataset_nesting |
Limit to the amount of nesting a path can have. Defaults to 50. |
vfs.zfs.max_log_walking |
The number of past TXGs that the flushing algorithm of the log spacemap feature uses to estimate incoming log blocks |
vfs.zfs.max_logsm_summary_length |
Maximum number of rows allowed in the summary of the spacemap log |
vfs.zfs.max_missing_tvds |
Allow importing pool with up to this number of missing top-level vdevs (in read-only mode) |
vfs.zfs.max_missing_tvds_cachefile |
Allow importing pools with missing top-level vdevs in cache file |
vfs.zfs.max_missing_tvds_scan |
Allow importing pools with missing top-level vdevs during scan |
vfs.zfs.max_nvlist_src_size |
Maximum size in bytes allowed for src nvlist passed with ZFS ioctls |
vfs.zfs.max_recordsize |
Max allowed record size |
vfs.zfs.metaslab |
ZFS metaslab |
vfs.zfs.metaslab.aliquot |
Allocation granularity (a.k.a. stripe size) |
vfs.zfs.metaslab.bias_enabled |
Enable space-based metaslab group biasing |
vfs.zfs.metaslab.debug_load |
Load all metaslabs when pool is first opened |
vfs.zfs.metaslab.debug_unload |
Prevent metaslabs from being unloaded |
vfs.zfs.metaslab.df_alloc_threshold |
Minimum size which forces the dynamic allocator to change its allocation strategy |
vfs.zfs.metaslab.df_free_pct |
The minimum free space, in percent, which must be available in a space map to continue allocations in a first-fit fashion |
vfs.zfs.metaslab.df_max_search |
Max distance (bytes) to search forward before using size tree |
vfs.zfs.metaslab.df_use_largest_segment |
When looking in size tree, use largest segment instead of exact fit |
vfs.zfs.metaslab.find_max_tries |
Normally only consider this many of the best metaslabs in each vdev |
vfs.zfs.metaslab.force_ganging |
Blocks larger than this size are sometimes forced to be gang blocks |
vfs.zfs.metaslab.force_ganging_pct |
Percentage of large blocks that will be forced to be gang blocks |
vfs.zfs.metaslab.fragmentation_factor_enabled |
Use the fragmentation metric to prefer less fragmented metaslabs |
vfs.zfs.metaslab.fragmentation_threshold |
Fragmentation for metaslab to allow allocation |
vfs.zfs.metaslab.lba_weighting_enabled |
Prefer metaslabs with lower LBAs |
vfs.zfs.metaslab.max_size_cache_sec |
How long to trust the cached max chunk size of a metaslab |
vfs.zfs.metaslab.mem_limit |
Percentage of memory that can be used to store metaslab range trees |
vfs.zfs.metaslab.perf_bias |
Enable performance-based metaslab group biasing |
vfs.zfs.metaslab.preload_enabled |
Preload potential metaslabs during reassessment |
vfs.zfs.metaslab.preload_limit |
Max number of metaslabs per group to preload |
vfs.zfs.metaslab.preload_pct |
Percentage of CPUs to run a metaslab preload taskq |
vfs.zfs.metaslab.segment_weight_enabled |
Enable segment-based metaslab selection |
vfs.zfs.metaslab.sm_blksz_no_log |
Block size for space map in pools with log space map disabled. Power of 2 greater than 4096. |
vfs.zfs.metaslab.sm_blksz_with_log |
Block size for space map in pools with log space map enabled. Power of 2 greater than 4096. |
vfs.zfs.metaslab.switch_threshold |
Segment-based metaslab selection maximum buckets before switching |
vfs.zfs.metaslab.try_hard_before_gang |
Try hard to allocate before ganging |
vfs.zfs.metaslab.unload_delay |
Delay in txgs after metaslab was last used before unloading |
vfs.zfs.metaslab.unload_delay_ms |
Delay in milliseconds after metaslab was last used before unloading |
vfs.zfs.mfu_data_esize |
size of evictable data in mfu state |
vfs.zfs.mfu_ghost_data_esize |
size of evictable data in mfu ghost state |
vfs.zfs.mfu_ghost_metadata_esize |
size of evictable metadata in mfu ghost state |
vfs.zfs.mfu_ghost_size |
size of mfu ghost state |
vfs.zfs.mfu_metadata_esize |
size of evictable metadata in mfu state |
vfs.zfs.mfu_size |
size of mfu state |
vfs.zfs.mg |
ZFS metaslab group |
vfs.zfs.mg.fragmentation_threshold |
Percentage of metaslab group size that should be considered eligible for allocations unless all metaslab groups within the metaslab class have also crossed this threshold |
vfs.zfs.mg.noalloc_threshold |
Percentage of metaslab group size that should be free to make it eligible for allocation |
vfs.zfs.min_auto_ashift |
Min ashift used when creating new top-level vdev. (LEGACY) |
vfs.zfs.min_metaslabs_to_flush |
Minimum number of metaslabs to flush per dirty TXG |
vfs.zfs.mru_data_esize |
size of evictable data in mru state |
vfs.zfs.mru_ghost_data_esize |
size of evictable data in mru ghost state |
vfs.zfs.mru_ghost_metadata_esize |
size of evictable metadata in mru ghost state |
vfs.zfs.mru_ghost_size |
size of mru ghost state |
vfs.zfs.mru_metadata_esize |
size of evictable metadata in mru state |
vfs.zfs.mru_size |
size of mru state |
vfs.zfs.multihost |
ZFS multihost protection |
vfs.zfs.multihost.fail_intervals |
Max allowed period without a successful mmp write |
vfs.zfs.multihost.history |
Historical statistics for last N multihost writes |
vfs.zfs.multihost.import_intervals |
Number of zfs_multihost_interval periods to wait for activity |
vfs.zfs.multihost.interval |
Milliseconds between mmp writes to each leaf |
vfs.zfs.multilist_num_sublists |
Number of sublists used in each multilist |
vfs.zfs.no_scrub_io |
Set to disable scrub I/O |
vfs.zfs.no_scrub_prefetch |
Set to disable scrub prefetching |
vfs.zfs.nocacheflush |
Disable cache flushes |
vfs.zfs.nopwrite_enabled |
Enable NOP writes |
vfs.zfs.num_allocators |
Number of allocators per spa |
vfs.zfs.obsolete_min_time_ms |
Min millisecs to obsolete per txg |
vfs.zfs.pd_bytes_max |
Max number of bytes to prefetch |
vfs.zfs.per_txg_dirty_frees_percent |
Percentage of dirtied blocks from frees in one TXG |
vfs.zfs.prefetch |
ZFS prefetch |
vfs.zfs.prefetch.disable |
Disable all ZFS prefetching |
vfs.zfs.prefetch.hole_shift |
Max log2 fraction of holes in a stream |
vfs.zfs.prefetch.max_distance |
Max bytes to prefetch per stream |
vfs.zfs.prefetch.max_idistance |
Max bytes to prefetch indirects for per stream |
vfs.zfs.prefetch.max_reorder |
Max request reorder distance within a stream |
vfs.zfs.prefetch.max_sec_reap |
Max time before stream delete |
vfs.zfs.prefetch.max_streams |
Max number of streams per zfetch |
vfs.zfs.prefetch.min_distance |
Min bytes to prefetch per stream |
vfs.zfs.prefetch.min_sec_reap |
Min time before stream reclaim |
vfs.zfs.read_history |
Historical statistics for the last N reads |
vfs.zfs.read_history_hits |
Include cache hits in read history |
vfs.zfs.rebuild_max_segment |
Max segment size in bytes of rebuild reads |
vfs.zfs.rebuild_scrub_enabled |
Automatically scrub after sequential resilver completes |
vfs.zfs.rebuild_vdev_limit |
Max bytes in flight per leaf vdev for sequential resilvers |
vfs.zfs.reconstruct |
ZFS reconstruct |
vfs.zfs.reconstruct.indirect_combinations_max |
Maximum number of combinations when reconstructing split segments |
vfs.zfs.recover |
Set to attempt to recover from fatal errors |
vfs.zfs.recv |
ZFS receive |
vfs.zfs.recv.best_effort_corrective |
Ignore errors during corrective receive |
vfs.zfs.recv.queue_ff |
Receive queue fill fraction |
vfs.zfs.recv.queue_length |
Maximum receive queue length |
vfs.zfs.recv.write_batch_size |
Maximum amount of writes to batch into one transaction |
vfs.zfs.removal_suspend_progress |
Ensures certain actions can happen while in the middle of a removal |
vfs.zfs.remove_max_segment |
Largest contiguous segment ZFS will attempt to allocate when removing a device |
vfs.zfs.resilver_defer_percent |
Issued IO percent complete after which resilvers are deferred |
vfs.zfs.resilver_disable_defer |
Process all resilvers immediately |
vfs.zfs.resilver_min_time_ms |
Min millisecs to resilver per txg |
vfs.zfs.scan_blkstats |
Enable block statistics calculation during scrub |
vfs.zfs.scan_checkpoint_intval |
Scan progress on-disk checkpointing interval |
vfs.zfs.scan_fill_weight |
Tunable to adjust bias towards more filled segments during scans |
vfs.zfs.scan_ignore_errors |
Ignore errors during resilver/scrub |
vfs.zfs.scan_issue_strategy |
IO issuing strategy during scrubbing. 0 = default, 1 = LBA, 2 = size |
vfs.zfs.scan_legacy |
Scrub using legacy non-sequential method |
vfs.zfs.scan_max_ext_gap |
Max gap in bytes between sequential scrub / resilver I/Os |
vfs.zfs.scan_mem_lim_fact |
Fraction of RAM for scan hard limit |
vfs.zfs.scan_mem_lim_soft_fact |
Fraction of hard limit used as soft limit |
vfs.zfs.scan_report_txgs |
Tunable to report resilver performance over the last N txgs |
vfs.zfs.scan_strict_mem_lim |
Tunable to attempt to reduce lock contention |
vfs.zfs.scan_suspend_progress |
Set to prevent scans from progressing |
vfs.zfs.scan_vdev_limit |
Max bytes in flight per leaf vdev for scrubs and resilvers |
vfs.zfs.scrub_after_expand |
For expanded RAIDZ, automatically start a pool scrub when expansion completes |
vfs.zfs.scrub_error_blocks_per_txg |
Error blocks to be scrubbed in one txg |
vfs.zfs.scrub_min_time_ms |
Min millisecs to scrub per txg |
vfs.zfs.send |
ZFS send |
vfs.zfs.send.corrupt_data |
Allow sending corrupt data |
vfs.zfs.send.no_prefetch_queue_ff |
Send queue fill fraction for non-prefetch queues |
vfs.zfs.send.no_prefetch_queue_length |
Maximum send queue length for non-prefetch queues |
vfs.zfs.send.override_estimate_recordsize |
Override block size estimate with fixed size |
vfs.zfs.send.queue_ff |
Send queue fill fraction |
vfs.zfs.send.queue_length |
Maximum send queue length |
vfs.zfs.send.unmodified_spill_blocks |
Send unmodified spill blocks |
vfs.zfs.send_holes_without_birth_time |
Ignore hole_birth txg for zfs send |
vfs.zfs.sha256_impl |
Select SHA256 implementation. |
vfs.zfs.sha512_impl |
Select SHA512 implementation. |
vfs.zfs.slow_io_events_per_second |
Rate limit slow IO (delay) events to this many per second |
vfs.zfs.snapshot_history_enabled |
Include snapshot events in pool history/events |
vfs.zfs.spa |
ZFS space allocation |
vfs.zfs.spa.asize_inflation |
SPA size estimate multiplication factor |
vfs.zfs.spa.discard_memory_limit |
Limit for memory used in prefetching the checkpoint space map done on each vdev while discarding the checkpoint |
vfs.zfs.spa.load_print_vdev_tree |
Print vdev tree to zfs_dbgmsg during pool import |
vfs.zfs.spa.load_verify_data |
Set to traverse data on pool import |
vfs.zfs.spa.load_verify_metadata |
Set to traverse metadata on pool import |
vfs.zfs.spa.load_verify_shift |
log2 fraction of arc that can be used by inflight I/Os when verifying pool during import |
vfs.zfs.spa.slop_shift |
Reserved free space in pool |
vfs.zfs.spa.upgrade_errlog_limit |
Limit the number of errors which will be upgraded to the new on-disk error log when enabling head_errlog |
vfs.zfs.space_map_ibs |
Space map indirect block shift |
vfs.zfs.special_class_metadata_reserve_pct |
Small file blocks in special vdevs depend on this much free space being available |
vfs.zfs.standard_sm_blksz |
Block size for standard space map. Power of 2 greater than 4096. |
vfs.zfs.super_owner |
File system owners can perform privileged operations on file systems |
vfs.zfs.sync_pass_deferred_free |
Defer frees starting in this pass |
vfs.zfs.sync_pass_dont_compress |
Don't compress starting in this pass |
vfs.zfs.sync_pass_rewrite |
Rewrite new bps starting in this pass |
vfs.zfs.sync_taskq_batch_pct |
Max percent of CPUs that are used to sync dirty data |
vfs.zfs.top_maxinflight |
The maximum number of I/Os of all types active for each device. (LEGACY) |
vfs.zfs.traverse_indirect_prefetch_limit |
Number of blocks pointed to by an indirect block to prefetch during traversal |
vfs.zfs.trim |
ZFS TRIM |
vfs.zfs.trim.extent_bytes_max |
Max size of TRIM commands, larger will be split |
vfs.zfs.trim.extent_bytes_min |
Min size of TRIM commands, smaller will be skipped |
vfs.zfs.trim.metaslab_skip |
Skip metaslabs which have never been initialized |
vfs.zfs.trim.queue_limit |
Max queued TRIMs outstanding per leaf vdev |
vfs.zfs.trim.txg_batch |
Min number of txgs to aggregate frees before issuing TRIM |
vfs.zfs.txg |
ZFS transaction group |
vfs.zfs.txg.history |
Historical statistics for the last N txgs |
vfs.zfs.txg.timeout |
Max seconds worth of delta per txg |
vfs.zfs.uncached_data_esize |
size of evictable data in uncached state |
vfs.zfs.uncached_metadata_esize |
size of evictable metadata in uncached state |
vfs.zfs.uncached_size |
size of uncached state |
vfs.zfs.unflushed_log_block_max |
Hard limit (upper bound) on the size of the space map log in terms of blocks. |
vfs.zfs.unflushed_log_block_min |
Lower-bound limit for the maximum amount of blocks allowed in log spacemap (see zfs_unflushed_log_block_max) |
vfs.zfs.unflushed_log_block_pct |
Tunable used to determine the number of blocks that can be used for the spacemap log, expressed as a percentage of the total number of metaslabs in the pool (e.g. 400 means the number of log blocks is capped at 4 times the number of metaslabs) |
vfs.zfs.unflushed_log_txg_max |
Hard limit (upper bound) on the size of the space map log in terms of dirty TXGs. |
vfs.zfs.unflushed_max_mem_amt |
Specific hard-limit in memory that ZFS allows to be used for unflushed changes |
vfs.zfs.unflushed_max_mem_ppm |
Percentage of the overall system memory that ZFS allows to be used for unflushed changes (value is calculated over 1000000 for finer granularity) |
vfs.zfs.user_indirect_is_special |
Place user data indirect blocks into the special class |
vfs.zfs.validate_skip |
Enable to bypass vdev_validate(). |
vfs.zfs.vdev |
ZFS VDEV |
vfs.zfs.vdev.aggregation_limit |
Max vdev I/O aggregation size |
vfs.zfs.vdev.aggregation_limit_non_rotating |
Max vdev I/O aggregation size for non-rotating media |
vfs.zfs.vdev.async_read_max_active |
Max active async read I/Os per vdev |
vfs.zfs.vdev.async_read_min_active |
Min active async read I/Os per vdev |
vfs.zfs.vdev.async_write_active_max_dirty_percent |
Async write concurrency max threshold |
vfs.zfs.vdev.async_write_active_min_dirty_percent |
Async write concurrency min threshold |
vfs.zfs.vdev.async_write_max_active |
Max active async write I/Os per vdev |
vfs.zfs.vdev.async_write_min_active |
Min active async write I/Os per vdev |
vfs.zfs.vdev.bio_delete_disable |
Disable BIO_DELETE |
vfs.zfs.vdev.bio_flush_disable |
Disable BIO_FLUSH |
vfs.zfs.vdev.cache |
ZFS VDEV Cache |
vfs.zfs.vdev.def_queue_depth |
Default queue depth for each allocator |
vfs.zfs.vdev.default_ms_count |
Target number of metaslabs per top-level vdev |
vfs.zfs.vdev.default_ms_shift |
Default lower limit for metaslab size |
vfs.zfs.vdev.direct_write_verify |
Direct I/O writes perform checksum verification before committing the write |
vfs.zfs.vdev.expand_max_copy_bytes |
Max amount of concurrent I/O for RAIDZ expansion |
vfs.zfs.vdev.expand_max_reflow_bytes |
For testing, pause RAIDZ expansion after reflowing this many bytes |
vfs.zfs.vdev.file |
ZFS VDEV file |
vfs.zfs.vdev.file.logical_ashift |
Logical ashift for file-based devices |
vfs.zfs.vdev.file.physical_ashift |
Physical ashift for file-based devices |
vfs.zfs.vdev.initializing_max_active |
Max active initializing I/Os per vdev |
vfs.zfs.vdev.initializing_min_active |
Min active initializing I/Os per vdev |
vfs.zfs.vdev.io_aggregate_rows |
For expanded RAIDZ, aggregate reads that have more rows than this |
vfs.zfs.vdev.max_active |
Maximum number of active I/Os per vdev |
vfs.zfs.vdev.max_auto_ashift |
Maximum ashift used when optimizing for logical -> physical sector size on new top-level vdevs |
vfs.zfs.vdev.max_ms_shift |
Default upper limit for metaslab size |
vfs.zfs.vdev.min_auto_ashift |
Minimum ashift used when creating new top-level vdevs |
vfs.zfs.vdev.min_ms_count |
Minimum number of metaslabs per top-level vdev |
vfs.zfs.vdev.mirror |
ZFS VDEV mirror |
vfs.zfs.vdev.mirror.non_rotating_inc |
Non-rotating media load increment for non-seeking I/Os |
vfs.zfs.vdev.mirror.non_rotating_seek_inc |
Non-rotating media load increment for seeking I/Os |
vfs.zfs.vdev.mirror.rotating_inc |
Rotating media load increment for non-seeking I/Os |
vfs.zfs.vdev.mirror.rotating_seek_inc |
Rotating media load increment for seeking I/Os |
vfs.zfs.vdev.mirror.rotating_seek_offset |
Offset in bytes from the last I/O which triggers a reduced rotating media seek increment |
vfs.zfs.vdev.ms_count_limit |
Practical upper limit of total metaslabs per top-level vdev |
vfs.zfs.vdev.nia_credit |
Number of non-interactive I/Os to allow in sequence |
vfs.zfs.vdev.nia_delay |
Number of non-interactive I/Os before _max_active |
vfs.zfs.vdev.queue_depth_pct |
Queue depth percentage for each top-level vdev |
vfs.zfs.vdev.raidz_impl |
RAIDZ implementation |
vfs.zfs.vdev.read_gap_limit |
Aggregate read I/O over gap |
vfs.zfs.vdev.rebuild_max_active |
Max active rebuild I/Os per vdev |
vfs.zfs.vdev.rebuild_min_active |
Min active rebuild I/Os per vdev |
vfs.zfs.vdev.removal_ignore_errors |
Ignore hard IO errors when removing device |
vfs.zfs.vdev.removal_max_active |
Max active removal I/Os per vdev |
vfs.zfs.vdev.removal_max_span |
Largest span of free chunks a remap segment can span |
vfs.zfs.vdev.removal_min_active |
Min active removal I/Os per vdev |
vfs.zfs.vdev.removal_suspend_progress |
Pause device removal after this many bytes are copied (debug use only - causes removal to hang) |
vfs.zfs.vdev.remove_max_segment |
Largest contiguous segment to allocate when removing device |
vfs.zfs.vdev.scrub_max_active |
Max active scrub I/Os per vdev |
vfs.zfs.vdev.scrub_min_active |
Min active scrub I/Os per vdev |
vfs.zfs.vdev.sync_read_max_active |
Max active sync read I/Os per vdev |
vfs.zfs.vdev.sync_read_min_active |
Min active sync read I/Os per vdev |
vfs.zfs.vdev.sync_write_max_active |
Max active sync write I/Os per vdev |
vfs.zfs.vdev.sync_write_min_active |
Min active sync write I/Os per vdev |
vfs.zfs.vdev.trim_max_active |
Max active trim/discard I/Os per vdev |
vfs.zfs.vdev.trim_min_active |
Min active trim/discard I/Os per vdev |
vfs.zfs.vdev.validate_skip |
Bypass vdev_validate() |
vfs.zfs.vdev.write_gap_limit |
Aggregate write I/O over gap |
vfs.zfs.version |
ZFS versions |
vfs.zfs.version.acl |
ZFS_ACL_VERSION |
vfs.zfs.version.ioctl |
ZFS_IOCTL_VERSION |
vfs.zfs.version.module |
OpenZFS module version |
vfs.zfs.version.spa |
SPA_VERSION |
vfs.zfs.version.zpl |
ZPL_VERSION |
vfs.zfs.vnops |
ZFS VNOPS |
vfs.zfs.vnops.read_chunk_size |
Bytes to read per chunk |
vfs.zfs.vol |
ZFS VOLUME |
vfs.zfs.vol.mode |
Expose as GEOM providers (1), device files (2) or neither |
vfs.zfs.vol.recursive |
Allow zpools to use zvols as vdevs (DANGEROUS) |
vfs.zfs.vol.unmap_enabled |
Enable UNMAP functionality |
vfs.zfs.wrlog_data_max |
The size limit of write-transaction zil log data |
vfs.zfs.xattr_compat |
Use legacy ZFS xattr naming for writing new user namespace xattrs |
vfs.zfs.zap_iterate_prefetch |
When iterating ZAP object, prefetch it |
vfs.zfs.zap_micro_max_size |
Maximum micro ZAP size before converting to a fat ZAP, in bytes (max 1M) |
vfs.zfs.zap_shrink_enabled |
Enable ZAP shrinking |
vfs.zfs.zevent |
ZFS event |
vfs.zfs.zevent.len_max |
Max event queue length |
vfs.zfs.zevent.retain_expire_secs |
Expiration time for recent zevents records |
vfs.zfs.zevent.retain_max |
Maximum recent zevents records to retain for duplicate checking |
vfs.zfs.zfetch |
ZFS ZFETCH (LEGACY) |
vfs.zfs.zfetch.max_distance |
Max bytes to prefetch per stream (LEGACY) |
vfs.zfs.zfetch.max_idistance |
Max bytes to prefetch indirects for per stream (LEGACY) |
vfs.zfs.zil |
ZFS ZIL |
vfs.zfs.zil.clean_taskq_maxalloc |
Max number of taskq entries that are cached |
vfs.zfs.zil.clean_taskq_minalloc |
Number of taskq entries that are pre-populated |
vfs.zfs.zil.clean_taskq_nthr_pct |
Max percent of CPUs that are used per dp_sync_taskq |
vfs.zfs.zil.maxblocksize |
Limit in bytes of ZIL log block size |
vfs.zfs.zil.maxcopied |
Limit in bytes of WR_COPIED size |
vfs.zfs.zil.nocacheflush |
Disable ZIL cache flushes |
vfs.zfs.zil.replay_disable |
Disable intent logging replay |
vfs.zfs.zil.slog_bulk |
Limit in bytes slog sync writes per commit |
vfs.zfs.zil_saxattr |
Disable xattr=sa extended attribute logging in ZIL by setting 0. |
vfs.zfs.zio |
ZFS ZIO |
vfs.zfs.zio.deadman_log_all |
Log all slow ZIOs, not just those with vdevs |
vfs.zfs.zio.dva_throttle_enabled |
Throttle block allocations in the ZIO pipeline |
vfs.zfs.zio.exclude_metadata |
Exclude metadata buffers from dumps as well |
vfs.zfs.zio.requeue_io_start_cut_in_line |
Prioritize requeued I/O |
vfs.zfs.zio.slow_io_ms |
Max I/O completion time (milliseconds) before marking it as slow |
vfs.zfs.zio.taskq_batch_pct |
Percentage of CPUs to run an IO worker thread |
vfs.zfs.zio.taskq_batch_tpq |
Number of threads per IO worker taskqueue |
vfs.zfs.zio.taskq_read |
Configure IO queues for read IO |
vfs.zfs.zio.taskq_write |
Configure IO queues for write IO |
vfs.zfs.zio.taskq_write_tpq |
Number of CPUs per write issue taskq |
vfs.zfs.zvol_enforce_quotas |
Enable strict ZVOL quota enforcement |
vm |
Virtual memory |
vm.act_scan_laundry_weight |
weight given to clean vs. dirty pages in active queue scans |
vm.aslr_restarts |
Number of aslr failures |
vm.background_launder_max |
background laundering cap, in kilobytes |
vm.background_launder_rate |
background laundering rate, in kilobytes per second |
vm.cluster_anon |
Cluster anonymous mappings: 0 = no, 1 = yes if no hint, 2 = always |
vm.debug |
Memory allocation debugging |
vm.debug.divisor |
Debug & thrash every Nth item in the memory allocator, where N is this divisor |
vm.debug.skipped |
memory items skipped, not debugged |
vm.debug.trashed |
memory items debugged |
vm.debug.uma_multipage_slabs |
UMA may choose larger slab sizes for better efficiency |
vm.disable_swapspace_pageouts |
Disallow swapout of dirty pages |
vm.dmmax |
Maximum size of a swap block in pages |
vm.domain |
|
vm.domain.[num] |
|
vm.domain.[num].pageout_helper_threads_enabled |
Enable multi-threaded inactive queue scanning |
vm.domain.[num].pidctrl |
|
vm.domain.[num].pidctrl.bound |
Integral wind-up limit |
vm.domain.[num].pidctrl.derivative |
Error derivative (D) |
vm.domain.[num].pidctrl.error |
Current difference from setpoint value (P) |
vm.domain.[num].pidctrl.input |
Last controller process variable input |
vm.domain.[num].pidctrl.integral |
Accumulated error integral (I) |
vm.domain.[num].pidctrl.interval |
Interval between calculations (ticks) |
vm.domain.[num].pidctrl.kdd |
Inverse of derivative gain |
vm.domain.[num].pidctrl.kid |
Inverse of integral gain |
vm.domain.[num].pidctrl.kpd |
Inverse of proportional gain |
vm.domain.[num].pidctrl.olderror |
Error value from last interval |
vm.domain.[num].pidctrl.output |
Last controller output |
vm.domain.[num].pidctrl.setpoint |
Desired level for process variable |
vm.domain.[num].pidctrl.ticks |
Last controller runtime |
vm.domain.[num].stats |
|
vm.domain.[num].stats.active |
Active pages |
vm.domain.[num].stats.actpdpgs |
Active pages scanned by the page daemon |
vm.domain.[num].stats.free_count |
Free pages |
vm.domain.[num].stats.free_min |
Minimum free pages |
vm.domain.[num].stats.free_reserved |
Reserved free pages |
vm.domain.[num].stats.free_severe |
Severe free pages |
vm.domain.[num].stats.free_target |
Target free pages |
vm.domain.[num].stats.inactive |
Inactive pages |
vm.domain.[num].stats.inactive_pps |
inactive pages freed/second |
vm.domain.[num].stats.inactive_target |
Target inactive pages |
vm.domain.[num].stats.inactpdpgs |
Inactive pages scanned by the page daemon |
vm.domain.[num].stats.laundpdpgs |
Laundry pages scanned by the page daemon |
vm.domain.[num].stats.laundry |
laundry pages |
vm.domain.[num].stats.unswappable |
Unswappable pages |
vm.domain.[num].stats.unswppdpgs |
Unswappable pages scanned by the page daemon |
vm.imply_prot_max |
Imply maximum page protections in mmap() when none are specified |
vm.kmem_map_free |
Free space in kmem |
vm.kmem_map_size |
Current kmem allocation size |
vm.kmem_size |
Size of kernel memory |
vm.kmem_size_max |
Maximum size of kernel memory |
vm.kmem_size_min |
Minimum size of kernel memory |
vm.kmem_size_scale |
Scale factor for kernel memory size |
vm.kmem_zmax |
Maximum allocation size for which malloc(9) uses UMA as the backend |
vm.kstack_cache_size |
Maximum number of cached kernel stacks |
vm.kvm_free |
Amount of KVM free |
vm.kvm_size |
Size of KVM |
vm.largepages |
|
vm.largepages.1G |
number of non-transient largepages allocated |
vm.largepages.2M |
number of non-transient largepages allocated |
vm.largepages.reclaim_tries |
Number of contig reclaims before giving up for default alloc policy |
vm.loadavg |
Machine loadaverage history |
vm.lowmem_period |
Low memory callback period |
vm.malloc |
Malloc information |
vm.malloc.zone_count |
Number of malloc zones |
vm.malloc.zone_sizes |
Zone sizes used by malloc |
vm.max_kernel_address |
Max kernel address |
vm.max_user_wired |
system-wide limit to user-wired page count |
vm.md_malloc_wait |
Allow malloc to wait for memory allocations |
vm.min_kernel_address |
Min kernel address |
vm.mincore_mapped |
mincore reports mappings, not residency |
vm.ndomains |
Number of physical memory domains available. |
vm.nswapdev |
Number of swap devices |
vm.numa |
NUMA options |
vm.numa.disabled |
NUMA-awareness in the allocators is disabled |
vm.objects |
List of VM objects |
vm.old_mlock |
Do not apply RLIMIT_MEMLOCK on mlockall |
vm.old_msync |
Use old (insecure) msync behavior |
vm.oom_pf_secs |
|
vm.overcommit |
Configure virtual memory overcommit behavior. See tuning(7) for details. |
vm.page_blacklist |
Blacklist pages |
vm.pageout_cpus_per_thread |
Number of CPUs per pagedaemon worker thread |
vm.pageout_lock_miss |
vget() lock misses during pageout |
vm.pageout_oom_seq |
back-to-back calls to oom detector to start OOM |
vm.pageout_update_period |
Maximum active LRU update period |
vm.panic_on_oom |
Panic on the given number of out-of-memory errors instead of killing the largest process |
vm.pfault_oom_attempts |
Number of page allocation attempts in page fault handler before it triggers OOM handling |
vm.pfault_oom_wait |
Number of seconds to wait for free pages before retrying the page fault handler |
vm.pgcache_zone_max_pcpu |
Per-CPU page cache size |
vm.phys_free |
Phys Free Info |
vm.phys_locality |
Phys Locality Info |
vm.phys_pager_cluster |
prefault window size for phys pager |
vm.phys_segs |
Phys Seg Info |
vm.pmap |
VM/pmap parameters |
vm.pmap.ad_emulation_superpage_promotions |
|
vm.pmap.allow_2m_x_ept |
Allow executable superpage mappings in EPT |
vm.pmap.di_locked |
Locked delayed invalidation |
vm.pmap.invlpgb_works |
Is the invlpgb instruction available? |
vm.pmap.invpcid_works |
Is the invpcid instruction available? |
vm.pmap.kernel_maps |
Dump kernel address layout |
vm.pmap.kernel_pt_page_count |
Current number of allocated page table pages for the kernel |
vm.pmap.la57 |
5-level paging for host is enabled |
vm.pmap.large_map_pml4_entries |
Maximum number of PML4 entries for use by large map (tunable). Each entry corresponds to 512GB of address space. |
vm.pmap.num_accessed_emulations |
|
vm.pmap.num_dirty_emulations |
|
vm.pmap.num_superpage_accessed_emulations |
|
vm.pmap.pcid_enabled |
Is TLB Context ID enabled? |
vm.pmap.pcid_invlpg_workaround |
Enable small core PCID/INVLPG workaround |
vm.pmap.pcid_save_cnt |
Count of saved TLB context on switch |
vm.pmap.pde |
2MB page mapping counters |
vm.pmap.pde.demotions |
2MB page demotions |
vm.pmap.pde.mappings |
2MB page mappings |
vm.pmap.pde.p_failures |
2MB page promotion failures |
vm.pmap.pde.promotions |
2MB page promotions |
vm.pmap.pdpe |
1GB page mapping counters |
vm.pmap.pdpe.demotions |
1GB page demotions |
vm.pmap.pg_ps_enabled |
Are large page mappings enabled? |
vm.pmap.prefer_uva_la48 |
Userspace maps are limited to LA48 unless otherwise configured |
vm.pmap.pti |
Page Table Isolation enabled |
vm.pmap.pv_page_count |
Current number of allocated pv pages |
vm.pmap.user_pt_page_count |
Current number of allocated page table pages for userspace |
vm.reserv |
Reservation Info |
vm.reserv.broken |
Cumulative number of broken reservations |
vm.reserv.freed |
Cumulative number of freed reservations |
vm.reserv.fullpop |
Current number of full reservations |
vm.reserv.partpopq |
Partially populated reservation queues |
vm.reserv.reclaimed |
Cumulative number of reclaimed reservations |
vm.stats |
VM meter stats |
vm.stats.misc |
VM meter misc stats |
vm.stats.object |
VM object stats |
vm.stats.object.bypasses |
VM object bypasses |
vm.stats.object.collapse_waits |
Number of sleeps for collapse |
vm.stats.object.collapses |
VM object collapses |
vm.stats.page |
VM page statistics |
vm.stats.page.pqstate_commit_retries |
Number of failed per-page atomic queue state updates |
vm.stats.page.queue_nops |
Number of batched queue operations with no effects |
vm.stats.page.queue_ops |
Number of batched queue operations |
vm.stats.swap |
VM swap stats |
vm.stats.swap.free_completed |
Number of deferred frees completed |
vm.stats.swap.free_deferred |
Number of pages that deferred freeing swap space |
vm.stats.sys |
VM meter sys stats |
vm.stats.sys.v_intr |
Device interrupts |
vm.stats.sys.v_soft |
Software interrupts |
vm.stats.sys.v_swtch |
Context switches |
vm.stats.sys.v_syscall |
System calls |
vm.stats.sys.v_trap |
Traps |
vm.stats.vm |
VM meter vm stats |
vm.stats.vm.v_active_count |
Active pages |
vm.stats.vm.v_cache_count |
Dummy for compatibility |
vm.stats.vm.v_cow_faults |
Copy-on-write faults |
vm.stats.vm.v_cow_optim |
Optimized COW faults |
vm.stats.vm.v_dfree |
Pages freed by pagedaemon |
vm.stats.vm.v_forkpages |
VM pages affected by fork() |
vm.stats.vm.v_forks |
Number of fork() calls |
vm.stats.vm.v_free_count |
Free pages |
vm.stats.vm.v_free_min |
Minimum low-free-pages threshold |
vm.stats.vm.v_free_reserved |
Pages reserved for deadlock |
vm.stats.vm.v_free_severe |
Severe page depletion point |
vm.stats.vm.v_free_target |
Pages desired free |
vm.stats.vm.v_inactive_count |
Inactive pages |
vm.stats.vm.v_inactive_target |
Desired inactive pages |
vm.stats.vm.v_interrupt_free_min |
Reserved pages for interrupt code |
vm.stats.vm.v_intrans |
In transit page faults |
vm.stats.vm.v_io_faults |
Page faults requiring I/O |
vm.stats.vm.v_kthreadpages |
VM pages affected by fork() by kernel |
vm.stats.vm.v_kthreads |
Number of fork() calls by kernel |
vm.stats.vm.v_laundry_count |
Pages eligible for laundering |
vm.stats.vm.v_nofree_count |
Permanently allocated pages |
vm.stats.vm.v_ozfod |
Optimized zero fill pages |
vm.stats.vm.v_page_count |
Total number of pages in system |
vm.stats.vm.v_page_size |
Page size in bytes |
vm.stats.vm.v_pageout_free_min |
Min pages reserved for kernel |
vm.stats.vm.v_pdpages |
Pages analyzed by pagedaemon |
vm.stats.vm.v_pdshortfalls |
Page reclamation shortfalls |
vm.stats.vm.v_pdwakeups |
Pagedaemon wakeups |
vm.stats.vm.v_pfree |
Pages freed by exiting processes |
vm.stats.vm.v_reactivated |
Pages reactivated by pagedaemon |
vm.stats.vm.v_rforkpages |
VM pages affected by rfork() |
vm.stats.vm.v_rforks |
Number of rfork() calls |
vm.stats.vm.v_swapin |
Swap pager pageins |
vm.stats.vm.v_swapout |
Swap pager pageouts |
vm.stats.vm.v_swappgsin |
Swap pages swapped in |
vm.stats.vm.v_swappgsout |
Swap pages swapped out |
vm.stats.vm.v_tcached |
Dummy for compatibility |
vm.stats.vm.v_tfree |
Total pages freed |
vm.stats.vm.v_user_wire_count |
User-wired virtual memory |
vm.stats.vm.v_vforkpages |
VM pages affected by vfork() |
vm.stats.vm.v_vforks |
Number of vfork() calls |
vm.stats.vm.v_vm_faults |
Address memory faults |
vm.stats.vm.v_vnodein |
Vnode pager pageins |
vm.stats.vm.v_vnodeout |
Vnode pager pageouts |
vm.stats.vm.v_vnodepgsin |
Vnode pages paged in |
vm.stats.vm.v_vnodepgsout |
Vnode pages paged out |
vm.stats.vm.v_wire_count |
Wired pages |
vm.stats.vm.v_zfod |
Pages zero-filled on demand |
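Because the vm.stats.vm counters are expressed in pages, converting them to bytes requires vm.stats.vm.v_page_size. The sketch below builds a rough memory summary this way; it assumes these leaves are 32-bit unsigned integers, which is an assumption about the running kernel rather than something stated in this listing.

```c
/*
 * Sketch: a rough memory summary built from the page counters above,
 * assuming 32-bit unsigned sysctl values.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t
get_u32(const char *name)
{
	uint32_t v = 0;
	size_t len = sizeof(v);

	(void)sysctlbyname(name, &v, &len, NULL, 0);
	return (v);
}

int
main(void)
{
	uint64_t page_size = get_u32("vm.stats.vm.v_page_size");
	uint64_t total = get_u32("vm.stats.vm.v_page_count");
	uint64_t freep = get_u32("vm.stats.vm.v_free_count");
	uint64_t inact = get_u32("vm.stats.vm.v_inactive_count");

	printf("total:    %8ju MiB\n", (uintmax_t)(total * page_size >> 20));
	printf("free:     %8ju MiB\n", (uintmax_t)(freep * page_size >> 20));
	printf("inactive: %8ju MiB\n", (uintmax_t)(inact * page_size >> 20));
	return (0);
}
```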
vm.swap_async_max |
Maximum running async swap ops |
vm.swap_enabled |
Enable entire process swapout |
vm.swap_fragmentation |
Swap Fragmentation Info |
vm.swap_idle_enabled |
Allow swapout on idle criteria |
vm.swap_idle_threshold1 |
Guaranteed swapped-in time for a process |
vm.swap_idle_threshold2 |
Time before a process will be swapped out |
vm.swap_info |
Swap statistics by device |
vm.swap_maxpages |
Maximum amount of swap supported |
vm.swap_objects |
List of swap VM objects |
vm.swap_reserved |
Amount of swap storage needed to back all allocated anonymous memory. |
vm.swap_total |
Total amount of available swap storage. |
vm.swzone |
Actual size of swap metadata zone |
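vm.swap_reserved and vm.swap_total make it easy to check how much of the configured swap is committed to backing anonymous memory. The sketch below compares the two; it assumes both are reported in bytes as 64-bit values, which matches their descriptions above but is still an assumption about the value encoding.

```c
/*
 * Sketch: comparing committed anonymous memory (vm.swap_reserved)
 * against available swap (vm.swap_total), both assumed to be 64-bit
 * byte counts.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t
get_u64(const char *name)
{
	uint64_t v = 0;
	size_t len = sizeof(v);

	(void)sysctlbyname(name, &v, &len, NULL, 0);
	return (v);
}

int
main(void)
{
	uint64_t reserved = get_u64("vm.swap_reserved");
	uint64_t total = get_u64("vm.swap_total");

	printf("swap reserved: %ju MiB\n", (uintmax_t)(reserved >> 20));
	printf("swap total:    %ju MiB\n", (uintmax_t)(total >> 20));
	if (total != 0)
		printf("reserved/total: %.1f%%\n",
		    100.0 * (double)reserved / (double)total);
	return (0);
}
```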
vm.uma |
Universal Memory Allocator |
vm.uma.AIO |
|
vm.uma.AIO.bucket_size |
Desired per-cpu cache size |
vm.uma.AIO.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.AIO.domain |
|
vm.uma.AIO.domain.[num] |
|
vm.uma.AIO.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.AIO.domain.[num].imax |
maximum item count in this period |
vm.uma.AIO.domain.[num].imin |
minimum item count in this period |
vm.uma.AIO.domain.[num].limin |
Long time minimum item count |
vm.uma.AIO.domain.[num].nitems |
number of items in this domain |
vm.uma.AIO.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.AIO.domain.[num].wss |
Working set size |
vm.uma.AIO.flags |
Allocator configuration flags |
vm.uma.AIO.keg |
|
vm.uma.AIO.keg.align |
item alignment mask |
vm.uma.AIO.keg.domain |
|
vm.uma.AIO.keg.domain.[num] |
|
vm.uma.AIO.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.AIO.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.AIO.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.AIO.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.AIO.keg.ipers |
items available per-slab |
vm.uma.AIO.keg.name |
Keg name |
vm.uma.AIO.keg.ppera |
pages per-slab allocation |
vm.uma.AIO.keg.reserve |
number of reserved items |
vm.uma.AIO.keg.rsize |
Real object size with alignment |
vm.uma.AIO.limit |
|
vm.uma.AIO.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.AIO.limit.items |
Current number of allocated items if limit is set |
vm.uma.AIO.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.AIO.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.AIO.limit.sleeps |
Total zone limit sleeps |
vm.uma.AIO.size |
Allocation size |
vm.uma.AIO.stats |
|
vm.uma.AIO.stats.allocs |
Total allocation calls |
vm.uma.AIO.stats.current |
Current number of allocated items |
vm.uma.AIO.stats.fails |
Number of allocation failures |
vm.uma.AIO.stats.frees |
Total free calls |
vm.uma.AIO.stats.xdomain |
Free calls from the wrong domain |
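Every zone under vm.uma exposes the same set of leaves shown for AIO above, so per-zone statistics can be read by composing the OID string from the zone name. The sketch below does this for one zone; "AIO" is used only because it appears in this listing, and the value widths are probed at run time because some leaves may be plain ints while others are 64-bit counters (an assumption, not something the listing specifies).

```c
/*
 * Sketch: reading the per-zone UMA statistics for one zone by name and
 * deriving the number of live items as allocs - frees.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static int
zone_leaf(const char *zone, const char *leaf, uint64_t *out)
{
	char name[128];
	unsigned char buf[8] = { 0 };
	size_t len = sizeof(buf);

	snprintf(name, sizeof(name), "vm.uma.%s.%s", zone, leaf);
	if (sysctlbyname(name, buf, &len, NULL, 0) == -1)
		return (-1);
	if (len == sizeof(uint32_t)) {
		uint32_t v32;

		memcpy(&v32, buf, sizeof(v32));
		*out = v32;
	} else {
		memcpy(out, buf, sizeof(*out));
	}
	return (0);
}

int
main(void)
{
	const char *zone = "AIO";	/* any zone name under vm.uma */
	uint64_t size, allocs, frees, fails;

	if (zone_leaf(zone, "size", &size) == 0 &&
	    zone_leaf(zone, "stats.allocs", &allocs) == 0 &&
	    zone_leaf(zone, "stats.frees", &frees) == 0 &&
	    zone_leaf(zone, "stats.fails", &fails) == 0) {
		printf("zone %s: item size %ju bytes, live items %ju, "
		    "failures %ju\n", zone, (uintmax_t)size,
		    (uintmax_t)(allocs - frees), (uintmax_t)fails);
	}
	return (0);
}
```

The same approach covers the remaining zones listed below (AIOCB, AIOLIO, BUF_TRIE, and so on), whose leaves carry identical meanings.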
vm.uma.AIOCB |
|
vm.uma.AIOCB.bucket_size |
Desired per-cpu cache size |
vm.uma.AIOCB.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.AIOCB.domain |
|
vm.uma.AIOCB.domain.[num] |
|
vm.uma.AIOCB.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.AIOCB.domain.[num].imax |
maximum item count in this period |
vm.uma.AIOCB.domain.[num].imin |
minimum item count in this period |
vm.uma.AIOCB.domain.[num].limin |
Long time minimum item count |
vm.uma.AIOCB.domain.[num].nitems |
number of items in this domain |
vm.uma.AIOCB.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.AIOCB.domain.[num].wss |
Working set size |
vm.uma.AIOCB.flags |
Allocator configuration flags |
vm.uma.AIOCB.keg |
|
vm.uma.AIOCB.keg.align |
item alignment mask |
vm.uma.AIOCB.keg.domain |
|
vm.uma.AIOCB.keg.domain.[num] |
|
vm.uma.AIOCB.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.AIOCB.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.AIOCB.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.AIOCB.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.AIOCB.keg.ipers |
items available per-slab |
vm.uma.AIOCB.keg.name |
Keg name |
vm.uma.AIOCB.keg.ppera |
pages per-slab allocation |
vm.uma.AIOCB.keg.reserve |
number of reserved items |
vm.uma.AIOCB.keg.rsize |
Real object size with alignment |
vm.uma.AIOCB.limit |
|
vm.uma.AIOCB.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.AIOCB.limit.items |
Current number of allocated items if limit is set |
vm.uma.AIOCB.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.AIOCB.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.AIOCB.limit.sleeps |
Total zone limit sleeps |
vm.uma.AIOCB.size |
Allocation size |
vm.uma.AIOCB.stats |
|
vm.uma.AIOCB.stats.allocs |
Total allocation calls |
vm.uma.AIOCB.stats.current |
Current number of allocated items |
vm.uma.AIOCB.stats.fails |
Number of allocation failures |
vm.uma.AIOCB.stats.frees |
Total free calls |
vm.uma.AIOCB.stats.xdomain |
Free calls from the wrong domain |
vm.uma.AIOLIO |
|
vm.uma.AIOLIO.bucket_size |
Desired per-cpu cache size |
vm.uma.AIOLIO.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.AIOLIO.domain |
|
vm.uma.AIOLIO.domain.[num] |
|
vm.uma.AIOLIO.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.AIOLIO.domain.[num].imax |
maximum item count in this period |
vm.uma.AIOLIO.domain.[num].imin |
minimum item count in this period |
vm.uma.AIOLIO.domain.[num].limin |
Long time minimum item count |
vm.uma.AIOLIO.domain.[num].nitems |
number of items in this domain |
vm.uma.AIOLIO.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.AIOLIO.domain.[num].wss |
Working set size |
vm.uma.AIOLIO.flags |
Allocator configuration flags |
vm.uma.AIOLIO.keg |
|
vm.uma.AIOLIO.keg.align |
item alignment mask |
vm.uma.AIOLIO.keg.domain |
|
vm.uma.AIOLIO.keg.domain.[num] |
|
vm.uma.AIOLIO.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.AIOLIO.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.AIOLIO.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.AIOLIO.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.AIOLIO.keg.ipers |
items available per-slab |
vm.uma.AIOLIO.keg.name |
Keg name |
vm.uma.AIOLIO.keg.ppera |
pages per-slab allocation |
vm.uma.AIOLIO.keg.reserve |
number of reserved items |
vm.uma.AIOLIO.keg.rsize |
Real object size with alignment |
vm.uma.AIOLIO.limit |
|
vm.uma.AIOLIO.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.AIOLIO.limit.items |
Current number of allocated items if limit is set |
vm.uma.AIOLIO.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.AIOLIO.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.AIOLIO.limit.sleeps |
Total zone limit sleeps |
vm.uma.AIOLIO.size |
Allocation size |
vm.uma.AIOLIO.stats |
|
vm.uma.AIOLIO.stats.allocs |
Total allocation calls |
vm.uma.AIOLIO.stats.current |
Current number of allocated items |
vm.uma.AIOLIO.stats.fails |
Number of allocation failures |
vm.uma.AIOLIO.stats.frees |
Total free calls |
vm.uma.AIOLIO.stats.xdomain |
Free calls from the wrong domain |
vm.uma.BUF_TRIE |
|
vm.uma.BUF_TRIE.bucket_size |
Desired per-cpu cache size |
vm.uma.BUF_TRIE.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.BUF_TRIE.domain |
|
vm.uma.BUF_TRIE.domain.[num] |
|
vm.uma.BUF_TRIE.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.BUF_TRIE.domain.[num].imax |
maximum item count in this period |
vm.uma.BUF_TRIE.domain.[num].imin |
minimum item count in this period |
vm.uma.BUF_TRIE.domain.[num].limin |
Long time minimum item count |
vm.uma.BUF_TRIE.domain.[num].nitems |
number of items in this domain |
vm.uma.BUF_TRIE.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.BUF_TRIE.domain.[num].wss |
Working set size |
vm.uma.BUF_TRIE.flags |
Allocator configuration flags |
vm.uma.BUF_TRIE.keg |
|
vm.uma.BUF_TRIE.keg.align |
item alignment mask |
vm.uma.BUF_TRIE.keg.domain |
|
vm.uma.BUF_TRIE.keg.domain.[num] |
|
vm.uma.BUF_TRIE.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.BUF_TRIE.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.BUF_TRIE.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.BUF_TRIE.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.BUF_TRIE.keg.ipers |
items available per-slab |
vm.uma.BUF_TRIE.keg.name |
Keg name |
vm.uma.BUF_TRIE.keg.ppera |
pages per-slab allocation |
vm.uma.BUF_TRIE.keg.reserve |
number of reserved items |
vm.uma.BUF_TRIE.keg.rsize |
Real object size with alignment |
vm.uma.BUF_TRIE.limit |
|
vm.uma.BUF_TRIE.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.BUF_TRIE.limit.items |
Current number of allocated items if limit is set |
vm.uma.BUF_TRIE.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.BUF_TRIE.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.BUF_TRIE.limit.sleeps |
Total zone limit sleeps |
vm.uma.BUF_TRIE.size |
Allocation size |
vm.uma.BUF_TRIE.stats |
|
vm.uma.BUF_TRIE.stats.allocs |
Total allocation calls |
vm.uma.BUF_TRIE.stats.current |
Current number of allocated items |
vm.uma.BUF_TRIE.stats.fails |
Number of allocation failures |
vm.uma.BUF_TRIE.stats.frees |
Total free calls |
vm.uma.BUF_TRIE.stats.xdomain |
Free calls from the wrong domain |
vm.uma.DEVCTL |
|
vm.uma.DEVCTL.bucket_size |
Desired per-cpu cache size |
vm.uma.DEVCTL.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.DEVCTL.domain |
|
vm.uma.DEVCTL.domain.[num] |
|
vm.uma.DEVCTL.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.DEVCTL.domain.[num].imax |
maximum item count in this period |
vm.uma.DEVCTL.domain.[num].imin |
minimum item count in this period |
vm.uma.DEVCTL.domain.[num].limin |
Long time minimum item count |
vm.uma.DEVCTL.domain.[num].nitems |
number of items in this domain |
vm.uma.DEVCTL.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.DEVCTL.domain.[num].wss |
Working set size |
vm.uma.DEVCTL.flags |
Allocator configuration flags |
vm.uma.DEVCTL.keg |
|
vm.uma.DEVCTL.keg.align |
item alignment mask |
vm.uma.DEVCTL.keg.domain |
|
vm.uma.DEVCTL.keg.domain.[num] |
|
vm.uma.DEVCTL.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.DEVCTL.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.DEVCTL.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.DEVCTL.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.DEVCTL.keg.ipers |
items available per-slab |
vm.uma.DEVCTL.keg.name |
Keg name |
vm.uma.DEVCTL.keg.ppera |
pages per-slab allocation |
vm.uma.DEVCTL.keg.reserve |
number of reserved items |
vm.uma.DEVCTL.keg.rsize |
Real object size with alignment |
vm.uma.DEVCTL.limit |
|
vm.uma.DEVCTL.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.DEVCTL.limit.items |
Current number of allocated items if limit is set |
vm.uma.DEVCTL.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.DEVCTL.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.DEVCTL.limit.sleeps |
Total zone limit sleeps |
vm.uma.DEVCTL.size |
Allocation size |
vm.uma.DEVCTL.stats |
|
vm.uma.DEVCTL.stats.allocs |
Total allocation calls |
vm.uma.DEVCTL.stats.current |
Current number of allocated items |
vm.uma.DEVCTL.stats.fails |
Number of allocation failures |
vm.uma.DEVCTL.stats.frees |
Total free calls |
vm.uma.DEVCTL.stats.xdomain |
Free calls from the wrong domain |
vm.uma.DIRHASH |
|
vm.uma.DIRHASH.bucket_size |
Desired per-cpu cache size |
vm.uma.DIRHASH.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.DIRHASH.domain |
|
vm.uma.DIRHASH.domain.[num] |
|
vm.uma.DIRHASH.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.DIRHASH.domain.[num].imax |
maximum item count in this period |
vm.uma.DIRHASH.domain.[num].imin |
minimum item count in this period |
vm.uma.DIRHASH.domain.[num].limin |
Long time minimum item count |
vm.uma.DIRHASH.domain.[num].nitems |
number of items in this domain |
vm.uma.DIRHASH.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.DIRHASH.domain.[num].wss |
Working set size |
vm.uma.DIRHASH.flags |
Allocator configuration flags |
vm.uma.DIRHASH.keg |
|
vm.uma.DIRHASH.keg.align |
item alignment mask |
vm.uma.DIRHASH.keg.domain |
|
vm.uma.DIRHASH.keg.domain.[num] |
|
vm.uma.DIRHASH.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.DIRHASH.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.DIRHASH.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.DIRHASH.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.DIRHASH.keg.ipers |
items available per-slab |
vm.uma.DIRHASH.keg.name |
Keg name |
vm.uma.DIRHASH.keg.ppera |
pages per-slab allocation |
vm.uma.DIRHASH.keg.reserve |
number of reserved items |
vm.uma.DIRHASH.keg.rsize |
Real object size with alignment |
vm.uma.DIRHASH.limit |
|
vm.uma.DIRHASH.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.DIRHASH.limit.items |
Current number of allocated items if limit is set |
vm.uma.DIRHASH.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.DIRHASH.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.DIRHASH.limit.sleeps |
Total zone limit sleeps |
vm.uma.DIRHASH.size |
Allocation size |
vm.uma.DIRHASH.stats |
|
vm.uma.DIRHASH.stats.allocs |
Total allocation calls |
vm.uma.DIRHASH.stats.current |
Current number of allocated items |
vm.uma.DIRHASH.stats.fails |
Number of allocation failures |
vm.uma.DIRHASH.stats.frees |
Total free calls |
vm.uma.DIRHASH.stats.xdomain |
Free calls from the wrong domain |
vm.uma.FFS[num]_dinode |
|
vm.uma.FFS[num]_dinode.bucket_size |
Desired per-cpu cache size |
vm.uma.FFS[num]_dinode.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.FFS[num]_dinode.domain |
|
vm.uma.FFS[num]_dinode.domain.[num] |
|
vm.uma.FFS[num]_dinode.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.FFS[num]_dinode.domain.[num].imax |
maximum item count in this period |
vm.uma.FFS[num]_dinode.domain.[num].imin |
minimum item count in this period |
vm.uma.FFS[num]_dinode.domain.[num].limin |
Long time minimum item count |
vm.uma.FFS[num]_dinode.domain.[num].nitems |
number of items in this domain |
vm.uma.FFS[num]_dinode.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.FFS[num]_dinode.domain.[num].wss |
Working set size |
vm.uma.FFS[num]_dinode.flags |
Allocator configuration flags |
vm.uma.FFS[num]_dinode.keg |
|
vm.uma.FFS[num]_dinode.keg.align |
item alignment mask |
vm.uma.FFS[num]_dinode.keg.domain |
|
vm.uma.FFS[num]_dinode.keg.domain.[num] |
|
vm.uma.FFS[num]_dinode.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.FFS[num]_dinode.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.FFS[num]_dinode.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.FFS[num]_dinode.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.FFS[num]_dinode.keg.ipers |
items available per-slab |
vm.uma.FFS[num]_dinode.keg.name |
Keg name |
vm.uma.FFS[num]_dinode.keg.ppera |
pages per-slab allocation |
vm.uma.FFS[num]_dinode.keg.reserve |
number of reserved items |
vm.uma.FFS[num]_dinode.keg.rsize |
Real object size with alignment |
vm.uma.FFS[num]_dinode.limit |
|
vm.uma.FFS[num]_dinode.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.FFS[num]_dinode.limit.items |
Current number of allocated items if limit is set |
vm.uma.FFS[num]_dinode.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.FFS[num]_dinode.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.FFS[num]_dinode.limit.sleeps |
Total zone limit sleeps |
vm.uma.FFS[num]_dinode.size |
Allocation size |
vm.uma.FFS[num]_dinode.stats |
|
vm.uma.FFS[num]_dinode.stats.allocs |
Total allocation calls |
vm.uma.FFS[num]_dinode.stats.current |
Current number of allocated items |
vm.uma.FFS[num]_dinode.stats.fails |
Number of allocation failures |
vm.uma.FFS[num]_dinode.stats.frees |
Total free calls |
vm.uma.FFS[num]_dinode.stats.xdomain |
Free calls from the wrong domain |
vm.uma.FFS_inode |
|
vm.uma.FFS_inode.bucket_size |
Desired per-cpu cache size |
vm.uma.FFS_inode.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.FFS_inode.domain |
|
vm.uma.FFS_inode.domain.[num] |
|
vm.uma.FFS_inode.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.FFS_inode.domain.[num].imax |
maximum item count in this period |
vm.uma.FFS_inode.domain.[num].imin |
minimum item count in this period |
vm.uma.FFS_inode.domain.[num].limin |
Long time minimum item count |
vm.uma.FFS_inode.domain.[num].nitems |
number of items in this domain |
vm.uma.FFS_inode.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.FFS_inode.domain.[num].wss |
Working set size |
vm.uma.FFS_inode.flags |
Allocator configuration flags |
vm.uma.FFS_inode.keg |
|
vm.uma.FFS_inode.keg.align |
item alignment mask |
vm.uma.FFS_inode.keg.domain |
|
vm.uma.FFS_inode.keg.domain.[num] |
|
vm.uma.FFS_inode.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.FFS_inode.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.FFS_inode.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.FFS_inode.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.FFS_inode.keg.ipers |
items available per-slab |
vm.uma.FFS_inode.keg.name |
Keg name |
vm.uma.FFS_inode.keg.ppera |
pages per-slab allocation |
vm.uma.FFS_inode.keg.reserve |
number of reserved items |
vm.uma.FFS_inode.keg.rsize |
Real object size with alignment |
vm.uma.FFS_inode.limit |
|
vm.uma.FFS_inode.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.FFS_inode.limit.items |
Current number of allocated items if limit is set |
vm.uma.FFS_inode.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.FFS_inode.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.FFS_inode.limit.sleeps |
Total zone limit sleeps |
vm.uma.FFS_inode.size |
Allocation size |
vm.uma.FFS_inode.stats |
|
vm.uma.FFS_inode.stats.allocs |
Total allocation calls |
vm.uma.FFS_inode.stats.current |
Current number of allocated items |
vm.uma.FFS_inode.stats.fails |
Number of allocation failures |
vm.uma.FFS_inode.stats.frees |
Total free calls |
vm.uma.FFS_inode.stats.xdomain |
Free calls from the wrong domain |
vm.uma.FPU_save_area |
|
vm.uma.FPU_save_area.bucket_size |
Desired per-cpu cache size |
vm.uma.FPU_save_area.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.FPU_save_area.domain |
|
vm.uma.FPU_save_area.domain.[num] |
|
vm.uma.FPU_save_area.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.FPU_save_area.domain.[num].imax |
maximum item count in this period |
vm.uma.FPU_save_area.domain.[num].imin |
minimum item count in this period |
vm.uma.FPU_save_area.domain.[num].limin |
Long time minimum item count |
vm.uma.FPU_save_area.domain.[num].nitems |
number of items in this domain |
vm.uma.FPU_save_area.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.FPU_save_area.domain.[num].wss |
Working set size |
vm.uma.FPU_save_area.flags |
Allocator configuration flags |
vm.uma.FPU_save_area.keg |
|
vm.uma.FPU_save_area.keg.align |
item alignment mask |
vm.uma.FPU_save_area.keg.domain |
|
vm.uma.FPU_save_area.keg.domain.[num] |
|
vm.uma.FPU_save_area.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.FPU_save_area.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.FPU_save_area.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.FPU_save_area.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.FPU_save_area.keg.ipers |
items available per-slab |
vm.uma.FPU_save_area.keg.name |
Keg name |
vm.uma.FPU_save_area.keg.ppera |
pages per-slab allocation |
vm.uma.FPU_save_area.keg.reserve |
number of reserved items |
vm.uma.FPU_save_area.keg.rsize |
Real object size with alignment |
vm.uma.FPU_save_area.limit |
|
vm.uma.FPU_save_area.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.FPU_save_area.limit.items |
Current number of allocated items if limit is set |
vm.uma.FPU_save_area.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.FPU_save_area.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.FPU_save_area.limit.sleeps |
Total zone limit sleeps |
vm.uma.FPU_save_area.size |
Allocation size |
vm.uma.FPU_save_area.stats |
|
vm.uma.FPU_save_area.stats.allocs |
Total allocation calls |
vm.uma.FPU_save_area.stats.current |
Current number of allocated items |
vm.uma.FPU_save_area.stats.fails |
Number of allocation failures |
vm.uma.FPU_save_area.stats.frees |
Total free calls |
vm.uma.FPU_save_area.stats.xdomain |
Free calls from the wrong domain |
vm.uma.Files |
|
vm.uma.Files.bucket_size |
Desired per-cpu cache size |
vm.uma.Files.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.Files.domain |
|
vm.uma.Files.domain.[num] |
|
vm.uma.Files.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.Files.domain.[num].imax |
maximum item count in this period |
vm.uma.Files.domain.[num].imin |
minimum item count in this period |
vm.uma.Files.domain.[num].limin |
Long time minimum item count |
vm.uma.Files.domain.[num].nitems |
number of items in this domain |
vm.uma.Files.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.Files.domain.[num].wss |
Working set size |
vm.uma.Files.flags |
Allocator configuration flags |
vm.uma.Files.keg |
|
vm.uma.Files.keg.align |
item alignment mask |
vm.uma.Files.keg.domain |
|
vm.uma.Files.keg.domain.[num] |
|
vm.uma.Files.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.Files.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.Files.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.Files.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.Files.keg.ipers |
items available per-slab |
vm.uma.Files.keg.name |
Keg name |
vm.uma.Files.keg.ppera |
pages per-slab allocation |
vm.uma.Files.keg.reserve |
number of reserved items |
vm.uma.Files.keg.rsize |
Real object size with alignment |
vm.uma.Files.limit |
|
vm.uma.Files.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.Files.limit.items |
Current number of allocated items if limit is set |
vm.uma.Files.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.Files.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.Files.limit.sleeps |
Total zone limit sleeps |
vm.uma.Files.size |
Allocation size |
vm.uma.Files.stats |
|
vm.uma.Files.stats.allocs |
Total allocation calls |
vm.uma.Files.stats.current |
Current number of allocated items |
vm.uma.Files.stats.fails |
Number of allocation failures |
vm.uma.Files.stats.frees |
Total free calls |
vm.uma.Files.stats.xdomain |
Free calls from the wrong domain |
vm.uma.GELI_buffers |
|
vm.uma.GELI_buffers.bucket_size |
Desired per-cpu cache size |
vm.uma.GELI_buffers.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.GELI_buffers.domain |
|
vm.uma.GELI_buffers.domain.[num] |
|
vm.uma.GELI_buffers.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.GELI_buffers.domain.[num].imax |
maximum item count in this period |
vm.uma.GELI_buffers.domain.[num].imin |
minimum item count in this period |
vm.uma.GELI_buffers.domain.[num].limin |
Long time minimum item count |
vm.uma.GELI_buffers.domain.[num].nitems |
number of items in this domain |
vm.uma.GELI_buffers.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.GELI_buffers.domain.[num].wss |
Working set size |
vm.uma.GELI_buffers.flags |
Allocator configuration flags |
vm.uma.GELI_buffers.keg |
|
vm.uma.GELI_buffers.keg.align |
item alignment mask |
vm.uma.GELI_buffers.keg.domain |
|
vm.uma.GELI_buffers.keg.domain.[num] |
|
vm.uma.GELI_buffers.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.GELI_buffers.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.GELI_buffers.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.GELI_buffers.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.GELI_buffers.keg.ipers |
items available per-slab |
vm.uma.GELI_buffers.keg.name |
Keg name |
vm.uma.GELI_buffers.keg.ppera |
pages per-slab allocation |
vm.uma.GELI_buffers.keg.reserve |
number of reserved items |
vm.uma.GELI_buffers.keg.rsize |
Real object size with alignment |
vm.uma.GELI_buffers.limit |
|
vm.uma.GELI_buffers.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.GELI_buffers.limit.items |
Current number of allocated items if limit is set |
vm.uma.GELI_buffers.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.GELI_buffers.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.GELI_buffers.limit.sleeps |
Total zone limit sleeps |
vm.uma.GELI_buffers.size |
Allocation size |
vm.uma.GELI_buffers.stats |
|
vm.uma.GELI_buffers.stats.allocs |
Total allocation calls |
vm.uma.GELI_buffers.stats.current |
Current number of allocated items |
vm.uma.GELI_buffers.stats.fails |
Number of allocation failures |
vm.uma.GELI_buffers.stats.frees |
Total free calls |
vm.uma.GELI_buffers.stats.xdomain |
Free calls from the wrong domain |
vm.uma.IOMMU_MAP_ENTRY |
|
vm.uma.IOMMU_MAP_ENTRY.bucket_size |
Desired per-cpu cache size |
vm.uma.IOMMU_MAP_ENTRY.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.IOMMU_MAP_ENTRY.domain |
|
vm.uma.IOMMU_MAP_ENTRY.domain.[num] |
|
vm.uma.IOMMU_MAP_ENTRY.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.IOMMU_MAP_ENTRY.domain.[num].imax |
maximum item count in this period |
vm.uma.IOMMU_MAP_ENTRY.domain.[num].imin |
minimum item count in this period |
vm.uma.IOMMU_MAP_ENTRY.domain.[num].limin |
Long time minimum item count |
vm.uma.IOMMU_MAP_ENTRY.domain.[num].nitems |
number of items in this domain |
vm.uma.IOMMU_MAP_ENTRY.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.IOMMU_MAP_ENTRY.domain.[num].wss |
Working set size |
vm.uma.IOMMU_MAP_ENTRY.flags |
Allocator configuration flags |
vm.uma.IOMMU_MAP_ENTRY.keg |
|
vm.uma.IOMMU_MAP_ENTRY.keg.align |
item alignment mask |
vm.uma.IOMMU_MAP_ENTRY.keg.domain |
|
vm.uma.IOMMU_MAP_ENTRY.keg.domain.[num] |
|
vm.uma.IOMMU_MAP_ENTRY.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.IOMMU_MAP_ENTRY.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.IOMMU_MAP_ENTRY.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.IOMMU_MAP_ENTRY.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.IOMMU_MAP_ENTRY.keg.ipers |
items available per-slab |
vm.uma.IOMMU_MAP_ENTRY.keg.name |
Keg name |
vm.uma.IOMMU_MAP_ENTRY.keg.ppera |
pages per-slab allocation |
vm.uma.IOMMU_MAP_ENTRY.keg.reserve |
number of reserved items |
vm.uma.IOMMU_MAP_ENTRY.keg.rsize |
Real object size with alignment |
vm.uma.IOMMU_MAP_ENTRY.limit |
|
vm.uma.IOMMU_MAP_ENTRY.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.IOMMU_MAP_ENTRY.limit.items |
Current number of allocated items if limit is set |
vm.uma.IOMMU_MAP_ENTRY.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.IOMMU_MAP_ENTRY.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.IOMMU_MAP_ENTRY.limit.sleeps |
Total zone limit sleeps |
vm.uma.IOMMU_MAP_ENTRY.size |
Allocation size |
vm.uma.IOMMU_MAP_ENTRY.stats |
|
vm.uma.IOMMU_MAP_ENTRY.stats.allocs |
Total allocation calls |
vm.uma.IOMMU_MAP_ENTRY.stats.current |
Current number of allocated items |
vm.uma.IOMMU_MAP_ENTRY.stats.fails |
Number of allocation failures |
vm.uma.IOMMU_MAP_ENTRY.stats.frees |
Total free calls |
vm.uma.IOMMU_MAP_ENTRY.stats.xdomain |
Free calls from the wrong domain |
vm.uma.IPsec_SA_lft_c |
|
vm.uma.IPsec_SA_lft_c.bucket_size |
Desired per-cpu cache size |
vm.uma.IPsec_SA_lft_c.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.IPsec_SA_lft_c.domain |
|
vm.uma.IPsec_SA_lft_c.domain.[num] |
|
vm.uma.IPsec_SA_lft_c.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.IPsec_SA_lft_c.domain.[num].imax |
maximum item count in this period |
vm.uma.IPsec_SA_lft_c.domain.[num].imin |
minimum item count in this period |
vm.uma.IPsec_SA_lft_c.domain.[num].limin |
Long time minimum item count |
vm.uma.IPsec_SA_lft_c.domain.[num].nitems |
number of items in this domain |
vm.uma.IPsec_SA_lft_c.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.IPsec_SA_lft_c.domain.[num].wss |
Working set size |
vm.uma.IPsec_SA_lft_c.flags |
Allocator configuration flags |
vm.uma.IPsec_SA_lft_c.keg |
|
vm.uma.IPsec_SA_lft_c.keg.align |
item alignment mask |
vm.uma.IPsec_SA_lft_c.keg.domain |
|
vm.uma.IPsec_SA_lft_c.keg.domain.[num] |
|
vm.uma.IPsec_SA_lft_c.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.IPsec_SA_lft_c.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.IPsec_SA_lft_c.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.IPsec_SA_lft_c.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.IPsec_SA_lft_c.keg.ipers |
items available per-slab |
vm.uma.IPsec_SA_lft_c.keg.name |
Keg name |
vm.uma.IPsec_SA_lft_c.keg.ppera |
pages per-slab allocation |
vm.uma.IPsec_SA_lft_c.keg.reserve |
number of reserved items |
vm.uma.IPsec_SA_lft_c.keg.rsize |
Real object size with alignment |
vm.uma.IPsec_SA_lft_c.limit |
|
vm.uma.IPsec_SA_lft_c.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.IPsec_SA_lft_c.limit.items |
Current number of allocated items if limit is set |
vm.uma.IPsec_SA_lft_c.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.IPsec_SA_lft_c.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.IPsec_SA_lft_c.limit.sleeps |
Total zone limit sleeps |
vm.uma.IPsec_SA_lft_c.size |
Allocation size |
vm.uma.IPsec_SA_lft_c.stats |
|
vm.uma.IPsec_SA_lft_c.stats.allocs |
Total allocation calls |
vm.uma.IPsec_SA_lft_c.stats.current |
Current number of allocated items |
vm.uma.IPsec_SA_lft_c.stats.fails |
Number of allocation failures |
vm.uma.IPsec_SA_lft_c.stats.frees |
Total free calls |
vm.uma.IPsec_SA_lft_c.stats.xdomain |
Free calls from the wrong domain |
vm.uma.KMAP_ENTRY |
|
vm.uma.KMAP_ENTRY.bucket_size |
Desired per-cpu cache size |
vm.uma.KMAP_ENTRY.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.KMAP_ENTRY.domain |
|
vm.uma.KMAP_ENTRY.domain.[num] |
|
vm.uma.KMAP_ENTRY.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.KMAP_ENTRY.domain.[num].imax |
maximum item count in this period |
vm.uma.KMAP_ENTRY.domain.[num].imin |
minimum item count in this period |
vm.uma.KMAP_ENTRY.domain.[num].limin |
Long time minimum item count |
vm.uma.KMAP_ENTRY.domain.[num].nitems |
number of items in this domain |
vm.uma.KMAP_ENTRY.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.KMAP_ENTRY.domain.[num].wss |
Working set size |
vm.uma.KMAP_ENTRY.flags |
Allocator configuration flags |
vm.uma.KMAP_ENTRY.keg |
|
vm.uma.KMAP_ENTRY.keg.align |
item alignment mask |
vm.uma.KMAP_ENTRY.keg.domain |
|
vm.uma.KMAP_ENTRY.keg.domain.[num] |
|
vm.uma.KMAP_ENTRY.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.KMAP_ENTRY.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.KMAP_ENTRY.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.KMAP_ENTRY.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.KMAP_ENTRY.keg.ipers |
items available per-slab |
vm.uma.KMAP_ENTRY.keg.name |
Keg name |
vm.uma.KMAP_ENTRY.keg.ppera |
pages per-slab allocation |
vm.uma.KMAP_ENTRY.keg.reserve |
number of reserved items |
vm.uma.KMAP_ENTRY.keg.rsize |
Real object size with alignment |
vm.uma.KMAP_ENTRY.limit |
|
vm.uma.KMAP_ENTRY.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.KMAP_ENTRY.limit.items |
Current number of allocated items if limit is set |
vm.uma.KMAP_ENTRY.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.KMAP_ENTRY.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.KMAP_ENTRY.limit.sleeps |
Total zone limit sleeps |
vm.uma.KMAP_ENTRY.size |
Allocation size |
vm.uma.KMAP_ENTRY.stats |
|
vm.uma.KMAP_ENTRY.stats.allocs |
Total allocation calls |
vm.uma.KMAP_ENTRY.stats.current |
Current number of allocated items |
vm.uma.KMAP_ENTRY.stats.fails |
Number of allocation failures |
vm.uma.KMAP_ENTRY.stats.frees |
Total free calls |
vm.uma.KMAP_ENTRY.stats.xdomain |
Free calls from the wrong domain |
vm.uma.KNOTE |
|
vm.uma.KNOTE.bucket_size |
Desired per-cpu cache size |
vm.uma.KNOTE.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.KNOTE.domain |
|
vm.uma.KNOTE.domain.[num] |
|
vm.uma.KNOTE.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.KNOTE.domain.[num].imax |
maximum item count in this period |
vm.uma.KNOTE.domain.[num].imin |
minimum item count in this period |
vm.uma.KNOTE.domain.[num].limin |
Long time minimum item count |
vm.uma.KNOTE.domain.[num].nitems |
number of items in this domain |
vm.uma.KNOTE.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.KNOTE.domain.[num].wss |
Working set size |
vm.uma.KNOTE.flags |
Allocator configuration flags |
vm.uma.KNOTE.keg |
|
vm.uma.KNOTE.keg.align |
item alignment mask |
vm.uma.KNOTE.keg.domain |
|
vm.uma.KNOTE.keg.domain.[num] |
|
vm.uma.KNOTE.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.KNOTE.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.KNOTE.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.KNOTE.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.KNOTE.keg.ipers |
items available per-slab |
vm.uma.KNOTE.keg.name |
Keg name |
vm.uma.KNOTE.keg.ppera |
pages per-slab allocation |
vm.uma.KNOTE.keg.reserve |
number of reserved items |
vm.uma.KNOTE.keg.rsize |
Real object size with alignment |
vm.uma.KNOTE.limit |
|
vm.uma.KNOTE.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.KNOTE.limit.items |
Current number of allocated items if limit is set |
vm.uma.KNOTE.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.KNOTE.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.KNOTE.limit.sleeps |
Total zone limit sleeps |
vm.uma.KNOTE.size |
Allocation size |
vm.uma.KNOTE.stats |
|
vm.uma.KNOTE.stats.allocs |
Total allocation calls |
vm.uma.KNOTE.stats.current |
Current number of allocated items |
vm.uma.KNOTE.stats.fails |
Number of allocation failures |
vm.uma.KNOTE.stats.frees |
Total free calls |
vm.uma.KNOTE.stats.xdomain |
Free calls from the wrong domain |
vm.uma.LTS_VFS_Cache |
|
vm.uma.LTS_VFS_Cache.bucket_size |
Desired per-cpu cache size |
vm.uma.LTS_VFS_Cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.LTS_VFS_Cache.domain |
|
vm.uma.LTS_VFS_Cache.domain.[num] |
|
vm.uma.LTS_VFS_Cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.LTS_VFS_Cache.domain.[num].imax |
maximum item count in this period |
vm.uma.LTS_VFS_Cache.domain.[num].imin |
minimum item count in this period |
vm.uma.LTS_VFS_Cache.domain.[num].limin |
Long time minimum item count |
vm.uma.LTS_VFS_Cache.domain.[num].nitems |
number of items in this domain |
vm.uma.LTS_VFS_Cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.LTS_VFS_Cache.domain.[num].wss |
Working set size |
vm.uma.LTS_VFS_Cache.flags |
Allocator configuration flags |
vm.uma.LTS_VFS_Cache.keg |
|
vm.uma.LTS_VFS_Cache.keg.align |
item alignment mask |
vm.uma.LTS_VFS_Cache.keg.domain |
|
vm.uma.LTS_VFS_Cache.keg.domain.[num] |
|
vm.uma.LTS_VFS_Cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.LTS_VFS_Cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.LTS_VFS_Cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.LTS_VFS_Cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.LTS_VFS_Cache.keg.ipers |
items available per-slab |
vm.uma.LTS_VFS_Cache.keg.name |
Keg name |
vm.uma.LTS_VFS_Cache.keg.ppera |
pages per-slab allocation |
vm.uma.LTS_VFS_Cache.keg.reserve |
number of reserved items |
vm.uma.LTS_VFS_Cache.keg.rsize |
Real object size with alignment |
vm.uma.LTS_VFS_Cache.limit |
|
vm.uma.LTS_VFS_Cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.LTS_VFS_Cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.LTS_VFS_Cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.LTS_VFS_Cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.LTS_VFS_Cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.LTS_VFS_Cache.size |
Allocation size |
vm.uma.LTS_VFS_Cache.stats |
|
vm.uma.LTS_VFS_Cache.stats.allocs |
Total allocation calls |
vm.uma.LTS_VFS_Cache.stats.current |
Current number of allocated items |
vm.uma.LTS_VFS_Cache.stats.fails |
Number of allocation failures |
vm.uma.LTS_VFS_Cache.stats.frees |
Total free calls |
vm.uma.LTS_VFS_Cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.L_VFS_Cache |
|
vm.uma.L_VFS_Cache.bucket_size |
Desired per-cpu cache size |
vm.uma.L_VFS_Cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.L_VFS_Cache.domain |
|
vm.uma.L_VFS_Cache.domain.[num] |
|
vm.uma.L_VFS_Cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.L_VFS_Cache.domain.[num].imax |
maximum item count in this period |
vm.uma.L_VFS_Cache.domain.[num].imin |
minimum item count in this period |
vm.uma.L_VFS_Cache.domain.[num].limin |
Long time minimum item count |
vm.uma.L_VFS_Cache.domain.[num].nitems |
number of items in this domain |
vm.uma.L_VFS_Cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.L_VFS_Cache.domain.[num].wss |
Working set size |
vm.uma.L_VFS_Cache.flags |
Allocator configuration flags |
vm.uma.L_VFS_Cache.keg |
|
vm.uma.L_VFS_Cache.keg.align |
item alignment mask |
vm.uma.L_VFS_Cache.keg.domain |
|
vm.uma.L_VFS_Cache.keg.domain.[num] |
|
vm.uma.L_VFS_Cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.L_VFS_Cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.L_VFS_Cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.L_VFS_Cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.L_VFS_Cache.keg.ipers |
items available per-slab |
vm.uma.L_VFS_Cache.keg.name |
Keg name |
vm.uma.L_VFS_Cache.keg.ppera |
pages per-slab allocation |
vm.uma.L_VFS_Cache.keg.reserve |
number of reserved items |
vm.uma.L_VFS_Cache.keg.rsize |
Real object size with alignment |
vm.uma.L_VFS_Cache.limit |
|
vm.uma.L_VFS_Cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.L_VFS_Cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.L_VFS_Cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.L_VFS_Cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.L_VFS_Cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.L_VFS_Cache.size |
Allocation size |
vm.uma.L_VFS_Cache.stats |
|
vm.uma.L_VFS_Cache.stats.allocs |
Total allocation calls |
vm.uma.L_VFS_Cache.stats.current |
Current number of allocated items |
vm.uma.L_VFS_Cache.stats.fails |
Number of allocation failures |
vm.uma.L_VFS_Cache.stats.frees |
Total free calls |
vm.uma.L_VFS_Cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.MAC_labels |
|
vm.uma.MAC_labels.bucket_size |
Desired per-cpu cache size |
vm.uma.MAC_labels.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.MAC_labels.domain |
|
vm.uma.MAC_labels.domain.[num] |
|
vm.uma.MAC_labels.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.MAC_labels.domain.[num].imax |
maximum item count in this period |
vm.uma.MAC_labels.domain.[num].imin |
minimum item count in this period |
vm.uma.MAC_labels.domain.[num].limin |
Long time minimum item count |
vm.uma.MAC_labels.domain.[num].nitems |
number of items in this domain |
vm.uma.MAC_labels.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.MAC_labels.domain.[num].wss |
Working set size |
vm.uma.MAC_labels.flags |
Allocator configuration flags |
vm.uma.MAC_labels.keg |
|
vm.uma.MAC_labels.keg.align |
item alignment mask |
vm.uma.MAC_labels.keg.domain |
|
vm.uma.MAC_labels.keg.domain.[num] |
|
vm.uma.MAC_labels.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.MAC_labels.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.MAC_labels.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.MAC_labels.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.MAC_labels.keg.ipers |
items available per-slab |
vm.uma.MAC_labels.keg.name |
Keg name |
vm.uma.MAC_labels.keg.ppera |
pages per-slab allocation |
vm.uma.MAC_labels.keg.reserve |
number of reserved items |
vm.uma.MAC_labels.keg.rsize |
Real object size with alignment |
vm.uma.MAC_labels.limit |
|
vm.uma.MAC_labels.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.MAC_labels.limit.items |
Current number of allocated items if limit is set |
vm.uma.MAC_labels.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.MAC_labels.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.MAC_labels.limit.sleeps |
Total zone limit sleeps |
vm.uma.MAC_labels.size |
Allocation size |
vm.uma.MAC_labels.stats |
|
vm.uma.MAC_labels.stats.allocs |
Total allocation calls |
vm.uma.MAC_labels.stats.current |
Current number of allocated items |
vm.uma.MAC_labels.stats.fails |
Number of allocation failures |
vm.uma.MAC_labels.stats.frees |
Total free calls |
vm.uma.MAC_labels.stats.xdomain |
Free calls from the wrong domain |
vm.uma.MAP_ENTRY |
|
vm.uma.MAP_ENTRY.bucket_size |
Desired per-cpu cache size |
vm.uma.MAP_ENTRY.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.MAP_ENTRY.domain |
|
vm.uma.MAP_ENTRY.domain.[num] |
|
vm.uma.MAP_ENTRY.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.MAP_ENTRY.domain.[num].imax |
maximum item count in this period |
vm.uma.MAP_ENTRY.domain.[num].imin |
minimum item count in this period |
vm.uma.MAP_ENTRY.domain.[num].limin |
Long time minimum item count |
vm.uma.MAP_ENTRY.domain.[num].nitems |
number of items in this domain |
vm.uma.MAP_ENTRY.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.MAP_ENTRY.domain.[num].wss |
Working set size |
vm.uma.MAP_ENTRY.flags |
Allocator configuration flags |
vm.uma.MAP_ENTRY.keg |
|
vm.uma.MAP_ENTRY.keg.align |
item alignment mask |
vm.uma.MAP_ENTRY.keg.domain |
|
vm.uma.MAP_ENTRY.keg.domain.[num] |
|
vm.uma.MAP_ENTRY.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.MAP_ENTRY.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.MAP_ENTRY.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.MAP_ENTRY.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.MAP_ENTRY.keg.ipers |
items available per-slab |
vm.uma.MAP_ENTRY.keg.name |
Keg name |
vm.uma.MAP_ENTRY.keg.ppera |
pages per-slab allocation |
vm.uma.MAP_ENTRY.keg.reserve |
number of reserved items |
vm.uma.MAP_ENTRY.keg.rsize |
Real object size with alignment |
vm.uma.MAP_ENTRY.limit |
|
vm.uma.MAP_ENTRY.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.MAP_ENTRY.limit.items |
Current number of allocated items if limit is set |
vm.uma.MAP_ENTRY.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.MAP_ENTRY.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.MAP_ENTRY.limit.sleeps |
Total zone limit sleeps |
vm.uma.MAP_ENTRY.size |
Allocation size |
vm.uma.MAP_ENTRY.stats |
|
vm.uma.MAP_ENTRY.stats.allocs |
Total allocation calls |
vm.uma.MAP_ENTRY.stats.current |
Current number of allocated items |
vm.uma.MAP_ENTRY.stats.fails |
Number of allocation failures |
vm.uma.MAP_ENTRY.stats.frees |
Total free calls |
vm.uma.MAP_ENTRY.stats.xdomain |
Free calls from the wrong domain |
vm.uma.Mountpoints |
|
vm.uma.Mountpoints.bucket_size |
Desired per-cpu cache size |
vm.uma.Mountpoints.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.Mountpoints.domain |
|
vm.uma.Mountpoints.domain.[num] |
|
vm.uma.Mountpoints.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.Mountpoints.domain.[num].imax |
maximum item count in this period |
vm.uma.Mountpoints.domain.[num].imin |
minimum item count in this period |
vm.uma.Mountpoints.domain.[num].limin |
Long time minimum item count |
vm.uma.Mountpoints.domain.[num].nitems |
number of items in this domain |
vm.uma.Mountpoints.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.Mountpoints.domain.[num].wss |
Working set size |
vm.uma.Mountpoints.flags |
Allocator configuration flags |
vm.uma.Mountpoints.keg |
|
vm.uma.Mountpoints.keg.align |
item alignment mask |
vm.uma.Mountpoints.keg.domain |
|
vm.uma.Mountpoints.keg.domain.[num] |
|
vm.uma.Mountpoints.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.Mountpoints.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.Mountpoints.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.Mountpoints.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.Mountpoints.keg.ipers |
items available per-slab |
vm.uma.Mountpoints.keg.name |
Keg name |
vm.uma.Mountpoints.keg.ppera |
pages per-slab allocation |
vm.uma.Mountpoints.keg.reserve |
number of reserved items |
vm.uma.Mountpoints.keg.rsize |
Real object size with alignment |
vm.uma.Mountpoints.limit |
|
vm.uma.Mountpoints.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.Mountpoints.limit.items |
Current number of allocated items if limit is set |
vm.uma.Mountpoints.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.Mountpoints.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.Mountpoints.limit.sleeps |
Total zone limit sleeps |
vm.uma.Mountpoints.size |
Allocation size |
vm.uma.Mountpoints.stats |
|
vm.uma.Mountpoints.stats.allocs |
Total allocation calls |
vm.uma.Mountpoints.stats.current |
Current number of allocated items |
vm.uma.Mountpoints.stats.fails |
Number of allocation failures |
vm.uma.Mountpoints.stats.frees |
Total free calls |
vm.uma.Mountpoints.stats.xdomain |
Free calls from the wrong domain |
vm.uma.NAMEI |
|
vm.uma.NAMEI.bucket_size |
Desired per-cpu cache size |
vm.uma.NAMEI.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.NAMEI.domain |
|
vm.uma.NAMEI.domain.[num] |
|
vm.uma.NAMEI.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.NAMEI.domain.[num].imax |
maximum item count in this period |
vm.uma.NAMEI.domain.[num].imin |
minimum item count in this period |
vm.uma.NAMEI.domain.[num].limin |
Long time minimum item count |
vm.uma.NAMEI.domain.[num].nitems |
number of items in this domain |
vm.uma.NAMEI.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.NAMEI.domain.[num].wss |
Working set size |
vm.uma.NAMEI.flags |
Allocator configuration flags |
vm.uma.NAMEI.keg |
|
vm.uma.NAMEI.keg.align |
item alignment mask |
vm.uma.NAMEI.keg.domain |
|
vm.uma.NAMEI.keg.domain.[num] |
|
vm.uma.NAMEI.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.NAMEI.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.NAMEI.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.NAMEI.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.NAMEI.keg.ipers |
items available per-slab |
vm.uma.NAMEI.keg.name |
Keg name |
vm.uma.NAMEI.keg.ppera |
pages per-slab allocation |
vm.uma.NAMEI.keg.reserve |
number of reserved items |
vm.uma.NAMEI.keg.rsize |
Real object size with alignment |
vm.uma.NAMEI.limit |
|
vm.uma.NAMEI.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.NAMEI.limit.items |
Current number of allocated items if limit is set |
vm.uma.NAMEI.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.NAMEI.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.NAMEI.limit.sleeps |
Total zone limit sleeps |
vm.uma.NAMEI.size |
Allocation size |
vm.uma.NAMEI.stats |
|
vm.uma.NAMEI.stats.allocs |
Total allocation calls |
vm.uma.NAMEI.stats.current |
Current number of allocated items |
vm.uma.NAMEI.stats.fails |
Number of allocation failures |
vm.uma.NAMEI.stats.frees |
Total free calls |
vm.uma.NAMEI.stats.xdomain |
Free calls from the wrong domain |
vm.uma.NCLNODE |
|
vm.uma.NCLNODE.bucket_size |
Desired per-cpu cache size |
vm.uma.NCLNODE.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.NCLNODE.domain |
|
vm.uma.NCLNODE.domain.[num] |
|
vm.uma.NCLNODE.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.NCLNODE.domain.[num].imax |
maximum item count in this period |
vm.uma.NCLNODE.domain.[num].imin |
minimum item count in this period |
vm.uma.NCLNODE.domain.[num].limin |
Long time minimum item count |
vm.uma.NCLNODE.domain.[num].nitems |
number of items in this domain |
vm.uma.NCLNODE.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.NCLNODE.domain.[num].wss |
Working set size |
vm.uma.NCLNODE.flags |
Allocator configuration flags |
vm.uma.NCLNODE.keg |
|
vm.uma.NCLNODE.keg.align |
item alignment mask |
vm.uma.NCLNODE.keg.domain |
|
vm.uma.NCLNODE.keg.domain.[num] |
|
vm.uma.NCLNODE.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.NCLNODE.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.NCLNODE.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.NCLNODE.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.NCLNODE.keg.ipers |
items available per-slab |
vm.uma.NCLNODE.keg.name |
Keg name |
vm.uma.NCLNODE.keg.ppera |
pages per-slab allocation |
vm.uma.NCLNODE.keg.reserve |
number of reserved items |
vm.uma.NCLNODE.keg.rsize |
Real object size with alignment |
vm.uma.NCLNODE.limit |
|
vm.uma.NCLNODE.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.NCLNODE.limit.items |
Current number of allocated items if limit is set |
vm.uma.NCLNODE.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.NCLNODE.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.NCLNODE.limit.sleeps |
Total zone limit sleeps |
vm.uma.NCLNODE.size |
Allocation size |
vm.uma.NCLNODE.stats |
|
vm.uma.NCLNODE.stats.allocs |
Total allocation calls |
vm.uma.NCLNODE.stats.current |
Current number of allocated items |
vm.uma.NCLNODE.stats.fails |
Number of allocation failures |
vm.uma.NCLNODE.stats.frees |
Total free calls |
vm.uma.NCLNODE.stats.xdomain |
Free calls from the wrong domain |
vm.uma.NetGraph_data_items |
|
vm.uma.NetGraph_data_items.bucket_size |
Desired per-cpu cache size |
vm.uma.NetGraph_data_items.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.NetGraph_data_items.domain |
|
vm.uma.NetGraph_data_items.domain.[num] |
|
vm.uma.NetGraph_data_items.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.NetGraph_data_items.domain.[num].imax |
maximum item count in this period |
vm.uma.NetGraph_data_items.domain.[num].imin |
minimum item count in this period |
vm.uma.NetGraph_data_items.domain.[num].limin |
Long time minimum item count |
vm.uma.NetGraph_data_items.domain.[num].nitems |
number of items in this domain |
vm.uma.NetGraph_data_items.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.NetGraph_data_items.domain.[num].wss |
Working set size |
vm.uma.NetGraph_data_items.flags |
Allocator configuration flags |
vm.uma.NetGraph_data_items.keg |
|
vm.uma.NetGraph_data_items.keg.align |
item alignment mask |
vm.uma.NetGraph_data_items.keg.domain |
|
vm.uma.NetGraph_data_items.keg.domain.[num] |
|
vm.uma.NetGraph_data_items.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.NetGraph_data_items.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.NetGraph_data_items.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.NetGraph_data_items.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.NetGraph_data_items.keg.ipers |
items available per-slab |
vm.uma.NetGraph_data_items.keg.name |
Keg name |
vm.uma.NetGraph_data_items.keg.ppera |
pages per-slab allocation |
vm.uma.NetGraph_data_items.keg.reserve |
number of reserved items |
vm.uma.NetGraph_data_items.keg.rsize |
Real object size with alignment |
vm.uma.NetGraph_data_items.limit |
|
vm.uma.NetGraph_data_items.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.NetGraph_data_items.limit.items |
Current number of allocated items if limit is set |
vm.uma.NetGraph_data_items.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.NetGraph_data_items.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.NetGraph_data_items.limit.sleeps |
Total zone limit sleeps |
vm.uma.NetGraph_data_items.size |
Allocation size |
vm.uma.NetGraph_data_items.stats |
|
vm.uma.NetGraph_data_items.stats.allocs |
Total allocation calls |
vm.uma.NetGraph_data_items.stats.current |
Current number of allocated items |
vm.uma.NetGraph_data_items.stats.fails |
Number of allocation failures |
vm.uma.NetGraph_data_items.stats.frees |
Total free calls |
vm.uma.NetGraph_data_items.stats.xdomain |
Free calls from the wrong domain |
vm.uma.NetGraph_items |
|
vm.uma.NetGraph_items.bucket_size |
Desired per-cpu cache size |
vm.uma.NetGraph_items.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.NetGraph_items.domain |
|
vm.uma.NetGraph_items.domain.[num] |
|
vm.uma.NetGraph_items.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.NetGraph_items.domain.[num].imax |
maximum item count in this period |
vm.uma.NetGraph_items.domain.[num].imin |
minimum item count in this period |
vm.uma.NetGraph_items.domain.[num].limin |
Long time minimum item count |
vm.uma.NetGraph_items.domain.[num].nitems |
number of items in this domain |
vm.uma.NetGraph_items.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.NetGraph_items.domain.[num].wss |
Working set size |
vm.uma.NetGraph_items.flags |
Allocator configuration flags |
vm.uma.NetGraph_items.keg |
|
vm.uma.NetGraph_items.keg.align |
item alignment mask |
vm.uma.NetGraph_items.keg.domain |
|
vm.uma.NetGraph_items.keg.domain.[num] |
|
vm.uma.NetGraph_items.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.NetGraph_items.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.NetGraph_items.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.NetGraph_items.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.NetGraph_items.keg.ipers |
items available per-slab |
vm.uma.NetGraph_items.keg.name |
Keg name |
vm.uma.NetGraph_items.keg.ppera |
pages per-slab allocation |
vm.uma.NetGraph_items.keg.reserve |
number of reserved items |
vm.uma.NetGraph_items.keg.rsize |
Real object size with alignment |
vm.uma.NetGraph_items.limit |
|
vm.uma.NetGraph_items.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.NetGraph_items.limit.items |
Current number of allocated items if limit is set |
vm.uma.NetGraph_items.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.NetGraph_items.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.NetGraph_items.limit.sleeps |
Total zone limit sleeps |
vm.uma.NetGraph_items.size |
Allocation size |
vm.uma.NetGraph_items.stats |
|
vm.uma.NetGraph_items.stats.allocs |
Total allocation calls |
vm.uma.NetGraph_items.stats.current |
Current number of allocated items |
vm.uma.NetGraph_items.stats.fails |
Number of allocation failures |
vm.uma.NetGraph_items.stats.frees |
Total free calls |
vm.uma.NetGraph_items.stats.xdomain |
Free calls from the wrong domain |
vm.uma.PGRP |
|
vm.uma.PGRP.bucket_size |
Desired per-cpu cache size |
vm.uma.PGRP.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.PGRP.domain |
|
vm.uma.PGRP.domain.[num] |
|
vm.uma.PGRP.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.PGRP.domain.[num].imax |
maximum item count in this period |
vm.uma.PGRP.domain.[num].imin |
minimum item count in this period |
vm.uma.PGRP.domain.[num].limin |
Long time minimum item count |
vm.uma.PGRP.domain.[num].nitems |
number of items in this domain |
vm.uma.PGRP.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.PGRP.domain.[num].wss |
Working set size |
vm.uma.PGRP.flags |
Allocator configuration flags |
vm.uma.PGRP.keg |
|
vm.uma.PGRP.keg.align |
item alignment mask |
vm.uma.PGRP.keg.domain |
|
vm.uma.PGRP.keg.domain.[num] |
|
vm.uma.PGRP.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.PGRP.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.PGRP.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.PGRP.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.PGRP.keg.ipers |
items available per-slab |
vm.uma.PGRP.keg.name |
Keg name |
vm.uma.PGRP.keg.ppera |
pages per-slab allocation |
vm.uma.PGRP.keg.reserve |
number of reserved items |
vm.uma.PGRP.keg.rsize |
Real object size with alignment |
vm.uma.PGRP.limit |
|
vm.uma.PGRP.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.PGRP.limit.items |
Current number of allocated items if limit is set |
vm.uma.PGRP.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.PGRP.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.PGRP.limit.sleeps |
Total zone limit sleeps |
vm.uma.PGRP.size |
Allocation size |
vm.uma.PGRP.stats |
|
vm.uma.PGRP.stats.allocs |
Total allocation calls |
vm.uma.PGRP.stats.current |
Current number of allocated items |
vm.uma.PGRP.stats.fails |
Number of allocation failures |
vm.uma.PGRP.stats.frees |
Total free calls |
vm.uma.PGRP.stats.xdomain |
Free calls from the wrong domain |
vm.uma.PROC |
|
vm.uma.PROC.bucket_size |
Desired per-cpu cache size |
vm.uma.PROC.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.PROC.domain |
|
vm.uma.PROC.domain.[num] |
|
vm.uma.PROC.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.PROC.domain.[num].imax |
maximum item count in this period |
vm.uma.PROC.domain.[num].imin |
minimum item count in this period |
vm.uma.PROC.domain.[num].limin |
Long time minimum item count |
vm.uma.PROC.domain.[num].nitems |
number of items in this domain |
vm.uma.PROC.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.PROC.domain.[num].wss |
Working set size |
vm.uma.PROC.flags |
Allocator configuration flags |
vm.uma.PROC.keg |
|
vm.uma.PROC.keg.align |
item alignment mask |
vm.uma.PROC.keg.domain |
|
vm.uma.PROC.keg.domain.[num] |
|
vm.uma.PROC.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.PROC.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.PROC.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.PROC.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.PROC.keg.ipers |
items available per-slab |
vm.uma.PROC.keg.name |
Keg name |
vm.uma.PROC.keg.ppera |
pages per-slab allocation |
vm.uma.PROC.keg.reserve |
number of reserved items |
vm.uma.PROC.keg.rsize |
Real object size with alignment |
vm.uma.PROC.limit |
|
vm.uma.PROC.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.PROC.limit.items |
Current number of allocated items if limit is set |
vm.uma.PROC.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.PROC.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.PROC.limit.sleeps |
Total zone limit sleeps |
vm.uma.PROC.size |
Allocation size |
vm.uma.PROC.stats |
|
vm.uma.PROC.stats.allocs |
Total allocation calls |
vm.uma.PROC.stats.current |
Current number of allocated items |
vm.uma.PROC.stats.fails |
Number of allocation failures |
vm.uma.PROC.stats.frees |
Total free calls |
vm.uma.PROC.stats.xdomain |
Free calls from the wrong domain |
vm.uma.PWD |
|
vm.uma.PWD.bucket_size |
Desired per-cpu cache size |
vm.uma.PWD.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.PWD.domain |
|
vm.uma.PWD.domain.[num] |
|
vm.uma.PWD.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.PWD.domain.[num].imax |
maximum item count in this period |
vm.uma.PWD.domain.[num].imin |
minimum item count in this period |
vm.uma.PWD.domain.[num].limin |
Long time minimum item count |
vm.uma.PWD.domain.[num].nitems |
number of items in this domain |
vm.uma.PWD.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.PWD.domain.[num].wss |
Working set size |
vm.uma.PWD.flags |
Allocator configuration flags |
vm.uma.PWD.keg |
|
vm.uma.PWD.keg.align |
item alignment mask |
vm.uma.PWD.keg.domain |
|
vm.uma.PWD.keg.domain.[num] |
|
vm.uma.PWD.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.PWD.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.PWD.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.PWD.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.PWD.keg.ipers |
items available per-slab |
vm.uma.PWD.keg.name |
Keg name |
vm.uma.PWD.keg.ppera |
pages per-slab allocation |
vm.uma.PWD.keg.reserve |
number of reserved items |
vm.uma.PWD.keg.rsize |
Real object size with alignment |
vm.uma.PWD.limit |
|
vm.uma.PWD.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.PWD.limit.items |
Current number of allocated items if limit is set |
vm.uma.PWD.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.PWD.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.PWD.limit.sleeps |
Total zone limit sleeps |
vm.uma.PWD.size |
Allocation size |
vm.uma.PWD.stats |
|
vm.uma.PWD.stats.allocs |
Total allocation calls |
vm.uma.PWD.stats.current |
Current number of allocated items |
vm.uma.PWD.stats.fails |
Number of allocation failures |
vm.uma.PWD.stats.frees |
Total free calls |
vm.uma.PWD.stats.xdomain |
Free calls from the wrong domain |
vm.uma.RADIX_NODE |
|
vm.uma.RADIX_NODE.bucket_size |
Desired per-cpu cache size |
vm.uma.RADIX_NODE.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.RADIX_NODE.domain |
|
vm.uma.RADIX_NODE.domain.[num] |
|
vm.uma.RADIX_NODE.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.RADIX_NODE.domain.[num].imax |
maximum item count in this period |
vm.uma.RADIX_NODE.domain.[num].imin |
minimum item count in this period |
vm.uma.RADIX_NODE.domain.[num].limin |
Long time minimum item count |
vm.uma.RADIX_NODE.domain.[num].nitems |
number of items in this domain |
vm.uma.RADIX_NODE.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.RADIX_NODE.domain.[num].wss |
Working set size |
vm.uma.RADIX_NODE.flags |
Allocator configuration flags |
vm.uma.RADIX_NODE.keg |
|
vm.uma.RADIX_NODE.keg.align |
item alignment mask |
vm.uma.RADIX_NODE.keg.domain |
|
vm.uma.RADIX_NODE.keg.domain.[num] |
|
vm.uma.RADIX_NODE.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.RADIX_NODE.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.RADIX_NODE.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.RADIX_NODE.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.RADIX_NODE.keg.ipers |
items available per-slab |
vm.uma.RADIX_NODE.keg.name |
Keg name |
vm.uma.RADIX_NODE.keg.ppera |
pages per-slab allocation |
vm.uma.RADIX_NODE.keg.reserve |
number of reserved items |
vm.uma.RADIX_NODE.keg.rsize |
Real object size with alignment |
vm.uma.RADIX_NODE.limit |
|
vm.uma.RADIX_NODE.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.RADIX_NODE.limit.items |
Current number of allocated items if limit is set |
vm.uma.RADIX_NODE.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.RADIX_NODE.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.RADIX_NODE.limit.sleeps |
Total zone limit sleeps |
vm.uma.RADIX_NODE.size |
Allocation size |
vm.uma.RADIX_NODE.stats |
|
vm.uma.RADIX_NODE.stats.allocs |
Total allocation calls |
vm.uma.RADIX_NODE.stats.current |
Current number of allocated items |
vm.uma.RADIX_NODE.stats.fails |
Number of allocation failures |
vm.uma.RADIX_NODE.stats.frees |
Total free calls |
vm.uma.RADIX_NODE.stats.xdomain |
Free calls from the wrong domain |
vm.uma.SLEEPQUEUE |
|
vm.uma.SLEEPQUEUE.bucket_size |
Desired per-cpu cache size |
vm.uma.SLEEPQUEUE.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.SLEEPQUEUE.domain |
|
vm.uma.SLEEPQUEUE.domain.[num] |
|
vm.uma.SLEEPQUEUE.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.SLEEPQUEUE.domain.[num].imax |
maximum item count in this period |
vm.uma.SLEEPQUEUE.domain.[num].imin |
minimum item count in this period |
vm.uma.SLEEPQUEUE.domain.[num].limin |
Long time minimum item count |
vm.uma.SLEEPQUEUE.domain.[num].nitems |
number of items in this domain |
vm.uma.SLEEPQUEUE.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.SLEEPQUEUE.domain.[num].wss |
Working set size |
vm.uma.SLEEPQUEUE.flags |
Allocator configuration flags |
vm.uma.SLEEPQUEUE.keg |
|
vm.uma.SLEEPQUEUE.keg.align |
item alignment mask |
vm.uma.SLEEPQUEUE.keg.domain |
|
vm.uma.SLEEPQUEUE.keg.domain.[num] |
|
vm.uma.SLEEPQUEUE.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.SLEEPQUEUE.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.SLEEPQUEUE.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.SLEEPQUEUE.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.SLEEPQUEUE.keg.ipers |
items available per-slab |
vm.uma.SLEEPQUEUE.keg.name |
Keg name |
vm.uma.SLEEPQUEUE.keg.ppera |
pages per-slab allocation |
vm.uma.SLEEPQUEUE.keg.reserve |
number of reserved items |
vm.uma.SLEEPQUEUE.keg.rsize |
Real object size with alignment |
vm.uma.SLEEPQUEUE.limit |
|
vm.uma.SLEEPQUEUE.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.SLEEPQUEUE.limit.items |
Current number of allocated items if limit is set |
vm.uma.SLEEPQUEUE.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.SLEEPQUEUE.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.SLEEPQUEUE.limit.sleeps |
Total zone limit sleeps |
vm.uma.SLEEPQUEUE.size |
Allocation size |
vm.uma.SLEEPQUEUE.stats |
|
vm.uma.SLEEPQUEUE.stats.allocs |
Total allocation calls |
vm.uma.SLEEPQUEUE.stats.current |
Current number of allocated items |
vm.uma.SLEEPQUEUE.stats.fails |
Number of allocation failures |
vm.uma.SLEEPQUEUE.stats.frees |
Total free calls |
vm.uma.SLEEPQUEUE.stats.xdomain |
Free calls from the wrong domain |
vm.uma.SMR_CPU |
|
vm.uma.SMR_CPU.bucket_size |
Desired per-cpu cache size |
vm.uma.SMR_CPU.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.SMR_CPU.domain |
|
vm.uma.SMR_CPU.domain.[num] |
|
vm.uma.SMR_CPU.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.SMR_CPU.domain.[num].imax |
maximum item count in this period |
vm.uma.SMR_CPU.domain.[num].imin |
minimum item count in this period |
vm.uma.SMR_CPU.domain.[num].limin |
Long time minimum item count |
vm.uma.SMR_CPU.domain.[num].nitems |
number of items in this domain |
vm.uma.SMR_CPU.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.SMR_CPU.domain.[num].wss |
Working set size |
vm.uma.SMR_CPU.flags |
Allocator configuration flags |
vm.uma.SMR_CPU.keg |
|
vm.uma.SMR_CPU.keg.align |
item alignment mask |
vm.uma.SMR_CPU.keg.domain |
|
vm.uma.SMR_CPU.keg.domain.[num] |
|
vm.uma.SMR_CPU.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.SMR_CPU.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.SMR_CPU.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.SMR_CPU.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.SMR_CPU.keg.ipers |
items available per-slab |
vm.uma.SMR_CPU.keg.name |
Keg name |
vm.uma.SMR_CPU.keg.ppera |
pages per-slab allocation |
vm.uma.SMR_CPU.keg.reserve |
number of reserved items |
vm.uma.SMR_CPU.keg.rsize |
Real object size with alignment |
vm.uma.SMR_CPU.limit |
|
vm.uma.SMR_CPU.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.SMR_CPU.limit.items |
Current number of allocated items if limit is set |
vm.uma.SMR_CPU.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.SMR_CPU.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.SMR_CPU.limit.sleeps |
Total zone limit sleeps |
vm.uma.SMR_CPU.size |
Allocation size |
vm.uma.SMR_CPU.stats |
|
vm.uma.SMR_CPU.stats.allocs |
Total allocation calls |
vm.uma.SMR_CPU.stats.current |
Current number of allocated items |
vm.uma.SMR_CPU.stats.fails |
Number of allocation failures |
vm.uma.SMR_CPU.stats.frees |
Total free calls |
vm.uma.SMR_CPU.stats.xdomain |
Free calls from the wrong domain |
vm.uma.SMR_SHARED |
|
vm.uma.SMR_SHARED.bucket_size |
Desired per-cpu cache size |
vm.uma.SMR_SHARED.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.SMR_SHARED.domain |
|
vm.uma.SMR_SHARED.domain.[num] |
|
vm.uma.SMR_SHARED.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.SMR_SHARED.domain.[num].imax |
maximum item count in this period |
vm.uma.SMR_SHARED.domain.[num].imin |
minimum item count in this period |
vm.uma.SMR_SHARED.domain.[num].limin |
Long time minimum item count |
vm.uma.SMR_SHARED.domain.[num].nitems |
number of items in this domain |
vm.uma.SMR_SHARED.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.SMR_SHARED.domain.[num].wss |
Working set size |
vm.uma.SMR_SHARED.flags |
Allocator configuration flags |
vm.uma.SMR_SHARED.keg |
|
vm.uma.SMR_SHARED.keg.align |
item alignment mask |
vm.uma.SMR_SHARED.keg.domain |
|
vm.uma.SMR_SHARED.keg.domain.[num] |
|
vm.uma.SMR_SHARED.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.SMR_SHARED.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.SMR_SHARED.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.SMR_SHARED.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.SMR_SHARED.keg.ipers |
items available per-slab |
vm.uma.SMR_SHARED.keg.name |
Keg name |
vm.uma.SMR_SHARED.keg.ppera |
pages per-slab allocation |
vm.uma.SMR_SHARED.keg.reserve |
number of reserved items |
vm.uma.SMR_SHARED.keg.rsize |
Real object size with alignment |
vm.uma.SMR_SHARED.limit |
|
vm.uma.SMR_SHARED.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.SMR_SHARED.limit.items |
Current number of allocated items if limit is set |
vm.uma.SMR_SHARED.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.SMR_SHARED.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.SMR_SHARED.limit.sleeps |
Total zone limit sleeps |
vm.uma.SMR_SHARED.size |
Allocation size |
vm.uma.SMR_SHARED.stats |
|
vm.uma.SMR_SHARED.stats.allocs |
Total allocation calls |
vm.uma.SMR_SHARED.stats.current |
Current number of allocated items |
vm.uma.SMR_SHARED.stats.fails |
Number of allocation failures |
vm.uma.SMR_SHARED.stats.frees |
Total free calls |
vm.uma.SMR_SHARED.stats.xdomain |
Free calls from the wrong domain |
vm.uma.STS_VFS_Cache |
|
vm.uma.STS_VFS_Cache.bucket_size |
Desired per-cpu cache size |
vm.uma.STS_VFS_Cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.STS_VFS_Cache.domain |
|
vm.uma.STS_VFS_Cache.domain.[num] |
|
vm.uma.STS_VFS_Cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.STS_VFS_Cache.domain.[num].imax |
maximum item count in this period |
vm.uma.STS_VFS_Cache.domain.[num].imin |
minimum item count in this period |
vm.uma.STS_VFS_Cache.domain.[num].limin |
Long time minimum item count |
vm.uma.STS_VFS_Cache.domain.[num].nitems |
number of items in this domain |
vm.uma.STS_VFS_Cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.STS_VFS_Cache.domain.[num].wss |
Working set size |
vm.uma.STS_VFS_Cache.flags |
Allocator configuration flags |
vm.uma.STS_VFS_Cache.keg |
|
vm.uma.STS_VFS_Cache.keg.align |
item alignment mask |
vm.uma.STS_VFS_Cache.keg.domain |
|
vm.uma.STS_VFS_Cache.keg.domain.[num] |
|
vm.uma.STS_VFS_Cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.STS_VFS_Cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.STS_VFS_Cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.STS_VFS_Cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.STS_VFS_Cache.keg.ipers |
items available per-slab |
vm.uma.STS_VFS_Cache.keg.name |
Keg name |
vm.uma.STS_VFS_Cache.keg.ppera |
pages per-slab allocation |
vm.uma.STS_VFS_Cache.keg.reserve |
number of reserved items |
vm.uma.STS_VFS_Cache.keg.rsize |
Real object size with alignment |
vm.uma.STS_VFS_Cache.limit |
|
vm.uma.STS_VFS_Cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.STS_VFS_Cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.STS_VFS_Cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.STS_VFS_Cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.STS_VFS_Cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.STS_VFS_Cache.size |
Allocation size |
vm.uma.STS_VFS_Cache.stats |
|
vm.uma.STS_VFS_Cache.stats.allocs |
Total allocation calls |
vm.uma.STS_VFS_Cache.stats.current |
Current number of allocated items |
vm.uma.STS_VFS_Cache.stats.fails |
Number of allocation failures |
vm.uma.STS_VFS_Cache.stats.frees |
Total free calls |
vm.uma.STS_VFS_Cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.S_VFS_Cache |
|
vm.uma.S_VFS_Cache.bucket_size |
Desired per-cpu cache size |
vm.uma.S_VFS_Cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.S_VFS_Cache.domain |
|
vm.uma.S_VFS_Cache.domain.[num] |
|
vm.uma.S_VFS_Cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.S_VFS_Cache.domain.[num].imax |
maximum item count in this period |
vm.uma.S_VFS_Cache.domain.[num].imin |
minimum item count in this period |
vm.uma.S_VFS_Cache.domain.[num].limin |
Long time minimum item count |
vm.uma.S_VFS_Cache.domain.[num].nitems |
number of items in this domain |
vm.uma.S_VFS_Cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.S_VFS_Cache.domain.[num].wss |
Working set size |
vm.uma.S_VFS_Cache.flags |
Allocator configuration flags |
vm.uma.S_VFS_Cache.keg |
|
vm.uma.S_VFS_Cache.keg.align |
item alignment mask |
vm.uma.S_VFS_Cache.keg.domain |
|
vm.uma.S_VFS_Cache.keg.domain.[num] |
|
vm.uma.S_VFS_Cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.S_VFS_Cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.S_VFS_Cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.S_VFS_Cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.S_VFS_Cache.keg.ipers |
items available per-slab |
vm.uma.S_VFS_Cache.keg.name |
Keg name |
vm.uma.S_VFS_Cache.keg.ppera |
pages per-slab allocation |
vm.uma.S_VFS_Cache.keg.reserve |
number of reserved items |
vm.uma.S_VFS_Cache.keg.rsize |
Real object size with alignment |
vm.uma.S_VFS_Cache.limit |
|
vm.uma.S_VFS_Cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.S_VFS_Cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.S_VFS_Cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.S_VFS_Cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.S_VFS_Cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.S_VFS_Cache.size |
Allocation size |
vm.uma.S_VFS_Cache.stats |
|
vm.uma.S_VFS_Cache.stats.allocs |
Total allocation calls |
vm.uma.S_VFS_Cache.stats.current |
Current number of allocated items |
vm.uma.S_VFS_Cache.stats.fails |
Number of allocation failures |
vm.uma.S_VFS_Cache.stats.frees |
Total free calls |
vm.uma.S_VFS_Cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.THREAD |
|
vm.uma.THREAD.bucket_size |
Desired per-cpu cache size |
vm.uma.THREAD.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.THREAD.domain |
|
vm.uma.THREAD.domain.[num] |
|
vm.uma.THREAD.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.THREAD.domain.[num].imax |
maximum item count in this period |
vm.uma.THREAD.domain.[num].imin |
minimum item count in this period |
vm.uma.THREAD.domain.[num].limin |
Long time minimum item count |
vm.uma.THREAD.domain.[num].nitems |
number of items in this domain |
vm.uma.THREAD.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.THREAD.domain.[num].wss |
Working set size |
vm.uma.THREAD.flags |
Allocator configuration flags |
vm.uma.THREAD.keg |
|
vm.uma.THREAD.keg.align |
item alignment mask |
vm.uma.THREAD.keg.domain |
|
vm.uma.THREAD.keg.domain.[num] |
|
vm.uma.THREAD.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.THREAD.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.THREAD.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.THREAD.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.THREAD.keg.ipers |
items available per-slab |
vm.uma.THREAD.keg.name |
Keg name |
vm.uma.THREAD.keg.ppera |
pages per-slab allocation |
vm.uma.THREAD.keg.reserve |
number of reserved items |
vm.uma.THREAD.keg.rsize |
Real object size with alignment |
vm.uma.THREAD.limit |
|
vm.uma.THREAD.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.THREAD.limit.items |
Current number of allocated items if limit is set |
vm.uma.THREAD.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.THREAD.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.THREAD.limit.sleeps |
Total zone limit sleeps |
vm.uma.THREAD.size |
Allocation size |
vm.uma.THREAD.stats |
|
vm.uma.THREAD.stats.allocs |
Total allocation calls |
vm.uma.THREAD.stats.current |
Current number of allocated items |
vm.uma.THREAD.stats.fails |
Number of allocation failures |
vm.uma.THREAD.stats.frees |
Total free calls |
vm.uma.THREAD.stats.xdomain |
Free calls from the wrong domain |
vm.uma.TMPFS_node |
|
vm.uma.TMPFS_node.bucket_size |
Desired per-cpu cache size |
vm.uma.TMPFS_node.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.TMPFS_node.domain |
|
vm.uma.TMPFS_node.domain.[num] |
|
vm.uma.TMPFS_node.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.TMPFS_node.domain.[num].imax |
maximum item count in this period |
vm.uma.TMPFS_node.domain.[num].imin |
minimum item count in this period |
vm.uma.TMPFS_node.domain.[num].limin |
Long time minimum item count |
vm.uma.TMPFS_node.domain.[num].nitems |
number of items in this domain |
vm.uma.TMPFS_node.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.TMPFS_node.domain.[num].wss |
Working set size |
vm.uma.TMPFS_node.flags |
Allocator configuration flags |
vm.uma.TMPFS_node.keg |
|
vm.uma.TMPFS_node.keg.align |
item alignment mask |
vm.uma.TMPFS_node.keg.domain |
|
vm.uma.TMPFS_node.keg.domain.[num] |
|
vm.uma.TMPFS_node.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.TMPFS_node.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.TMPFS_node.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.TMPFS_node.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.TMPFS_node.keg.ipers |
items available per-slab |
vm.uma.TMPFS_node.keg.name |
Keg name |
vm.uma.TMPFS_node.keg.ppera |
pages per-slab allocation |
vm.uma.TMPFS_node.keg.reserve |
number of reserved items |
vm.uma.TMPFS_node.keg.rsize |
Real object size with alignment |
vm.uma.TMPFS_node.limit |
|
vm.uma.TMPFS_node.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.TMPFS_node.limit.items |
Current number of allocated items if limit is set |
vm.uma.TMPFS_node.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.TMPFS_node.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.TMPFS_node.limit.sleeps |
Total zone limit sleeps |
vm.uma.TMPFS_node.size |
Allocation size |
vm.uma.TMPFS_node.stats |
|
vm.uma.TMPFS_node.stats.allocs |
Total allocation calls |
vm.uma.TMPFS_node.stats.current |
Current number of allocated items |
vm.uma.TMPFS_node.stats.fails |
Number of allocation failures |
vm.uma.TMPFS_node.stats.frees |
Total free calls |
vm.uma.TMPFS_node.stats.xdomain |
Free calls from the wrong domain |
vm.uma.TURNSTILE |
|
vm.uma.TURNSTILE.bucket_size |
Desired per-cpu cache size |
vm.uma.TURNSTILE.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.TURNSTILE.domain |
|
vm.uma.TURNSTILE.domain.[num] |
|
vm.uma.TURNSTILE.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.TURNSTILE.domain.[num].imax |
maximum item count in this period |
vm.uma.TURNSTILE.domain.[num].imin |
minimum item count in this period |
vm.uma.TURNSTILE.domain.[num].limin |
Long time minimum item count |
vm.uma.TURNSTILE.domain.[num].nitems |
number of items in this domain |
vm.uma.TURNSTILE.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.TURNSTILE.domain.[num].wss |
Working set size |
vm.uma.TURNSTILE.flags |
Allocator configuration flags |
vm.uma.TURNSTILE.keg |
|
vm.uma.TURNSTILE.keg.align |
item alignment mask |
vm.uma.TURNSTILE.keg.domain |
|
vm.uma.TURNSTILE.keg.domain.[num] |
|
vm.uma.TURNSTILE.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.TURNSTILE.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.TURNSTILE.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.TURNSTILE.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.TURNSTILE.keg.ipers |
items available per-slab |
vm.uma.TURNSTILE.keg.name |
Keg name |
vm.uma.TURNSTILE.keg.ppera |
pages per-slab allocation |
vm.uma.TURNSTILE.keg.reserve |
number of reserved items |
vm.uma.TURNSTILE.keg.rsize |
Real object size with alignment |
vm.uma.TURNSTILE.limit |
|
vm.uma.TURNSTILE.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.TURNSTILE.limit.items |
Current number of allocated items if limit is set |
vm.uma.TURNSTILE.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.TURNSTILE.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.TURNSTILE.limit.sleeps |
Total zone limit sleeps |
vm.uma.TURNSTILE.size |
Allocation size |
vm.uma.TURNSTILE.stats |
|
vm.uma.TURNSTILE.stats.allocs |
Total allocation calls |
vm.uma.TURNSTILE.stats.current |
Current number of allocated items |
vm.uma.TURNSTILE.stats.fails |
Number of allocation failures |
vm.uma.TURNSTILE.stats.frees |
Total free calls |
vm.uma.TURNSTILE.stats.xdomain |
Free calls from the wrong domain |
vm.uma.UMA_Hash |
|
vm.uma.UMA_Hash.bucket_size |
Desired per-cpu cache size |
vm.uma.UMA_Hash.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.UMA_Hash.domain |
|
vm.uma.UMA_Hash.domain.[num] |
|
vm.uma.UMA_Hash.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.UMA_Hash.domain.[num].imax |
maximum item count in this period |
vm.uma.UMA_Hash.domain.[num].imin |
minimum item count in this period |
vm.uma.UMA_Hash.domain.[num].limin |
Long time minimum item count |
vm.uma.UMA_Hash.domain.[num].nitems |
number of items in this domain |
vm.uma.UMA_Hash.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.UMA_Hash.domain.[num].wss |
Working set size |
vm.uma.UMA_Hash.flags |
Allocator configuration flags |
vm.uma.UMA_Hash.keg |
|
vm.uma.UMA_Hash.keg.align |
item alignment mask |
vm.uma.UMA_Hash.keg.domain |
|
vm.uma.UMA_Hash.keg.domain.[num] |
|
vm.uma.UMA_Hash.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.UMA_Hash.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.UMA_Hash.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.UMA_Hash.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.UMA_Hash.keg.ipers |
items available per-slab |
vm.uma.UMA_Hash.keg.name |
Keg name |
vm.uma.UMA_Hash.keg.ppera |
pages per-slab allocation |
vm.uma.UMA_Hash.keg.reserve |
number of reserved items |
vm.uma.UMA_Hash.keg.rsize |
Real object size with alignment |
vm.uma.UMA_Hash.limit |
|
vm.uma.UMA_Hash.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.UMA_Hash.limit.items |
Current number of allocated items if limit is set |
vm.uma.UMA_Hash.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.UMA_Hash.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.UMA_Hash.limit.sleeps |
Total zone limit sleeps |
vm.uma.UMA_Hash.size |
Allocation size |
vm.uma.UMA_Hash.stats |
|
vm.uma.UMA_Hash.stats.allocs |
Total allocation calls |
vm.uma.UMA_Hash.stats.current |
Current number of allocated items |
vm.uma.UMA_Hash.stats.fails |
Number of allocation failures |
vm.uma.UMA_Hash.stats.frees |
Total free calls |
vm.uma.UMA_Hash.stats.xdomain |
Free calls from the wrong domain |
vm.uma.UMA_Kegs |
|
vm.uma.UMA_Kegs.bucket_size |
Desired per-cpu cache size |
vm.uma.UMA_Kegs.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.UMA_Kegs.domain |
|
vm.uma.UMA_Kegs.domain.[num] |
|
vm.uma.UMA_Kegs.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.UMA_Kegs.domain.[num].imax |
maximum item count in this period |
vm.uma.UMA_Kegs.domain.[num].imin |
minimum item count in this period |
vm.uma.UMA_Kegs.domain.[num].limin |
Long time minimum item count |
vm.uma.UMA_Kegs.domain.[num].nitems |
number of items in this domain |
vm.uma.UMA_Kegs.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.UMA_Kegs.domain.[num].wss |
Working set size |
vm.uma.UMA_Kegs.flags |
Allocator configuration flags |
vm.uma.UMA_Kegs.keg |
|
vm.uma.UMA_Kegs.keg.align |
item alignment mask |
vm.uma.UMA_Kegs.keg.domain |
|
vm.uma.UMA_Kegs.keg.domain.[num] |
|
vm.uma.UMA_Kegs.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.UMA_Kegs.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.UMA_Kegs.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.UMA_Kegs.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.UMA_Kegs.keg.ipers |
items available per-slab |
vm.uma.UMA_Kegs.keg.name |
Keg name |
vm.uma.UMA_Kegs.keg.ppera |
pages per-slab allocation |
vm.uma.UMA_Kegs.keg.reserve |
number of reserved items |
vm.uma.UMA_Kegs.keg.rsize |
Real object size with alignment |
vm.uma.UMA_Kegs.limit |
|
vm.uma.UMA_Kegs.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.UMA_Kegs.limit.items |
Current number of allocated items if limit is set |
vm.uma.UMA_Kegs.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.UMA_Kegs.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.UMA_Kegs.limit.sleeps |
Total zone limit sleeps |
vm.uma.UMA_Kegs.size |
Allocation size |
vm.uma.UMA_Kegs.stats |
|
vm.uma.UMA_Kegs.stats.allocs |
Total allocation calls |
vm.uma.UMA_Kegs.stats.current |
Current number of allocated items |
vm.uma.UMA_Kegs.stats.fails |
Number of allocation failures |
vm.uma.UMA_Kegs.stats.frees |
Total free calls |
vm.uma.UMA_Kegs.stats.xdomain |
Free calls from the wrong domain |
vm.uma.UMA_Slabs_[num] |
|
vm.uma.UMA_Slabs_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.UMA_Slabs_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.UMA_Slabs_[num].domain |
|
vm.uma.UMA_Slabs_[num].domain.[num] |
|
vm.uma.UMA_Slabs_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.UMA_Slabs_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.UMA_Slabs_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.UMA_Slabs_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.UMA_Slabs_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.UMA_Slabs_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.UMA_Slabs_[num].domain.[num].wss |
Working set size |
vm.uma.UMA_Slabs_[num].flags |
Allocator configuration flags |
vm.uma.UMA_Slabs_[num].keg |
|
vm.uma.UMA_Slabs_[num].keg.align |
item alignment mask |
vm.uma.UMA_Slabs_[num].keg.domain |
|
vm.uma.UMA_Slabs_[num].keg.domain.[num] |
|
vm.uma.UMA_Slabs_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.UMA_Slabs_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.UMA_Slabs_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.UMA_Slabs_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.UMA_Slabs_[num].keg.ipers |
items available per-slab |
vm.uma.UMA_Slabs_[num].keg.name |
Keg name |
vm.uma.UMA_Slabs_[num].keg.ppera |
pages per-slab allocation |
vm.uma.UMA_Slabs_[num].keg.reserve |
number of reserved items |
vm.uma.UMA_Slabs_[num].keg.rsize |
Real object size with alignment |
vm.uma.UMA_Slabs_[num].limit |
|
vm.uma.UMA_Slabs_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.UMA_Slabs_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.UMA_Slabs_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.UMA_Slabs_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.UMA_Slabs_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.UMA_Slabs_[num].size |
Allocation size |
vm.uma.UMA_Slabs_[num].stats |
|
vm.uma.UMA_Slabs_[num].stats.allocs |
Total allocation calls |
vm.uma.UMA_Slabs_[num].stats.current |
Current number of allocated items |
vm.uma.UMA_Slabs_[num].stats.fails |
Number of allocation failures |
vm.uma.UMA_Slabs_[num].stats.frees |
Total free calls |
vm.uma.UMA_Slabs_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.UMA_Zones |
|
vm.uma.UMA_Zones.bucket_size |
Desired per-cpu cache size |
vm.uma.UMA_Zones.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.UMA_Zones.domain |
|
vm.uma.UMA_Zones.domain.[num] |
|
vm.uma.UMA_Zones.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.UMA_Zones.domain.[num].imax |
maximum item count in this period |
vm.uma.UMA_Zones.domain.[num].imin |
minimum item count in this period |
vm.uma.UMA_Zones.domain.[num].limin |
Long time minimum item count |
vm.uma.UMA_Zones.domain.[num].nitems |
number of items in this domain |
vm.uma.UMA_Zones.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.UMA_Zones.domain.[num].wss |
Working set size |
vm.uma.UMA_Zones.flags |
Allocator configuration flags |
vm.uma.UMA_Zones.keg |
|
vm.uma.UMA_Zones.keg.align |
item alignment mask |
vm.uma.UMA_Zones.keg.domain |
|
vm.uma.UMA_Zones.keg.domain.[num] |
|
vm.uma.UMA_Zones.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.UMA_Zones.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.UMA_Zones.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.UMA_Zones.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.UMA_Zones.keg.ipers |
items available per-slab |
vm.uma.UMA_Zones.keg.name |
Keg name |
vm.uma.UMA_Zones.keg.ppera |
pages per-slab allocation |
vm.uma.UMA_Zones.keg.reserve |
number of reserved items |
vm.uma.UMA_Zones.keg.rsize |
Real object size with alignment |
vm.uma.UMA_Zones.limit |
|
vm.uma.UMA_Zones.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.UMA_Zones.limit.items |
Current number of allocated items if limit is set |
vm.uma.UMA_Zones.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.UMA_Zones.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.UMA_Zones.limit.sleeps |
Total zone limit sleeps |
vm.uma.UMA_Zones.size |
Allocation size |
vm.uma.UMA_Zones.stats |
|
vm.uma.UMA_Zones.stats.allocs |
Total allocation calls |
vm.uma.UMA_Zones.stats.current |
Current number of allocated items |
vm.uma.UMA_Zones.stats.fails |
Number of allocation failures |
vm.uma.UMA_Zones.stats.frees |
Total free calls |
vm.uma.UMA_Zones.stats.xdomain |
Free calls from the wrong domain |
vm.uma.VMSPACE |
|
vm.uma.VMSPACE.bucket_size |
Desired per-cpu cache size |
vm.uma.VMSPACE.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.VMSPACE.domain |
|
vm.uma.VMSPACE.domain.[num] |
|
vm.uma.VMSPACE.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.VMSPACE.domain.[num].imax |
maximum item count in this period |
vm.uma.VMSPACE.domain.[num].imin |
minimum item count in this period |
vm.uma.VMSPACE.domain.[num].limin |
Long time minimum item count |
vm.uma.VMSPACE.domain.[num].nitems |
number of items in this domain |
vm.uma.VMSPACE.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.VMSPACE.domain.[num].wss |
Working set size |
vm.uma.VMSPACE.flags |
Allocator configuration flags |
vm.uma.VMSPACE.keg |
|
vm.uma.VMSPACE.keg.align |
item alignment mask |
vm.uma.VMSPACE.keg.domain |
|
vm.uma.VMSPACE.keg.domain.[num] |
|
vm.uma.VMSPACE.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.VMSPACE.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.VMSPACE.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.VMSPACE.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.VMSPACE.keg.ipers |
items available per-slab |
vm.uma.VMSPACE.keg.name |
Keg name |
vm.uma.VMSPACE.keg.ppera |
pages per-slab allocation |
vm.uma.VMSPACE.keg.reserve |
number of reserved items |
vm.uma.VMSPACE.keg.rsize |
Real object size with alignment |
vm.uma.VMSPACE.limit |
|
vm.uma.VMSPACE.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.VMSPACE.limit.items |
Current number of allocated items if limit is set |
vm.uma.VMSPACE.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.VMSPACE.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.VMSPACE.limit.sleeps |
Total zone limit sleeps |
vm.uma.VMSPACE.size |
Allocation size |
vm.uma.VMSPACE.stats |
|
vm.uma.VMSPACE.stats.allocs |
Total allocation calls |
vm.uma.VMSPACE.stats.current |
Current number of allocated items |
vm.uma.VMSPACE.stats.fails |
Number of allocation failures |
vm.uma.VMSPACE.stats.frees |
Total free calls |
vm.uma.VMSPACE.stats.xdomain |
Free calls from the wrong domain |
vm.uma.VM_OBJECT |
|
vm.uma.VM_OBJECT.bucket_size |
Desired per-cpu cache size |
vm.uma.VM_OBJECT.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.VM_OBJECT.domain |
|
vm.uma.VM_OBJECT.domain.[num] |
|
vm.uma.VM_OBJECT.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.VM_OBJECT.domain.[num].imax |
maximum item count in this period |
vm.uma.VM_OBJECT.domain.[num].imin |
minimum item count in this period |
vm.uma.VM_OBJECT.domain.[num].limin |
Long time minimum item count |
vm.uma.VM_OBJECT.domain.[num].nitems |
number of items in this domain |
vm.uma.VM_OBJECT.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.VM_OBJECT.domain.[num].wss |
Working set size |
vm.uma.VM_OBJECT.flags |
Allocator configuration flags |
vm.uma.VM_OBJECT.keg |
|
vm.uma.VM_OBJECT.keg.align |
item alignment mask |
vm.uma.VM_OBJECT.keg.domain |
|
vm.uma.VM_OBJECT.keg.domain.[num] |
|
vm.uma.VM_OBJECT.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.VM_OBJECT.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.VM_OBJECT.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.VM_OBJECT.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.VM_OBJECT.keg.ipers |
items available per-slab |
vm.uma.VM_OBJECT.keg.name |
Keg name |
vm.uma.VM_OBJECT.keg.ppera |
pages per-slab allocation |
vm.uma.VM_OBJECT.keg.reserve |
number of reserved items |
vm.uma.VM_OBJECT.keg.rsize |
Real object size with alignment |
vm.uma.VM_OBJECT.limit |
|
vm.uma.VM_OBJECT.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.VM_OBJECT.limit.items |
Current number of allocated items if limit is set |
vm.uma.VM_OBJECT.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.VM_OBJECT.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.VM_OBJECT.limit.sleeps |
Total zone limit sleeps |
vm.uma.VM_OBJECT.size |
Allocation size |
vm.uma.VM_OBJECT.stats |
|
vm.uma.VM_OBJECT.stats.allocs |
Total allocation calls |
vm.uma.VM_OBJECT.stats.current |
Current number of allocated items |
vm.uma.VM_OBJECT.stats.fails |
Number of allocation failures |
vm.uma.VM_OBJECT.stats.frees |
Total free calls |
vm.uma.VM_OBJECT.stats.xdomain |
Free calls from the wrong domain |
vm.uma.VNODE |
|
vm.uma.VNODE.bucket_size |
Desired per-cpu cache size |
vm.uma.VNODE.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.VNODE.domain |
|
vm.uma.VNODE.domain.[num] |
|
vm.uma.VNODE.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.VNODE.domain.[num].imax |
maximum item count in this period |
vm.uma.VNODE.domain.[num].imin |
minimum item count in this period |
vm.uma.VNODE.domain.[num].limin |
Long time minimum item count |
vm.uma.VNODE.domain.[num].nitems |
number of items in this domain |
vm.uma.VNODE.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.VNODE.domain.[num].wss |
Working set size |
vm.uma.VNODE.flags |
Allocator configuration flags |
vm.uma.VNODE.keg |
|
vm.uma.VNODE.keg.align |
item alignment mask |
vm.uma.VNODE.keg.domain |
|
vm.uma.VNODE.keg.domain.[num] |
|
vm.uma.VNODE.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.VNODE.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.VNODE.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.VNODE.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.VNODE.keg.ipers |
items available per-slab |
vm.uma.VNODE.keg.name |
Keg name |
vm.uma.VNODE.keg.ppera |
pages per-slab allocation |
vm.uma.VNODE.keg.reserve |
number of reserved items |
vm.uma.VNODE.keg.rsize |
Real object size with alignment |
vm.uma.VNODE.limit |
|
vm.uma.VNODE.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.VNODE.limit.items |
Current number of allocated items if limit is set |
vm.uma.VNODE.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.VNODE.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.VNODE.limit.sleeps |
Total zone limit sleeps |
vm.uma.VNODE.size |
Allocation size |
vm.uma.VNODE.stats |
|
vm.uma.VNODE.stats.allocs |
Total allocation calls |
vm.uma.VNODE.stats.current |
Current number of allocated items |
vm.uma.VNODE.stats.fails |
Number of allocation failures |
vm.uma.VNODE.stats.frees |
Total free calls |
vm.uma.VNODE.stats.xdomain |
Free calls from the wrong domain |
vm.uma.[num]_Bucket |
|
vm.uma.[num]_Bucket.bucket_size |
Desired per-cpu cache size |
vm.uma.[num]_Bucket.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.[num]_Bucket.domain |
|
vm.uma.[num]_Bucket.domain.[num] |
|
vm.uma.[num]_Bucket.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.[num]_Bucket.domain.[num].imax |
maximum item count in this period |
vm.uma.[num]_Bucket.domain.[num].imin |
minimum item count in this period |
vm.uma.[num]_Bucket.domain.[num].limin |
Long time minimum item count |
vm.uma.[num]_Bucket.domain.[num].nitems |
number of items in this domain |
vm.uma.[num]_Bucket.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.[num]_Bucket.domain.[num].wss |
Working set size |
vm.uma.[num]_Bucket.flags |
Allocator configuration flags |
vm.uma.[num]_Bucket.keg |
|
vm.uma.[num]_Bucket.keg.align |
item alignment mask |
vm.uma.[num]_Bucket.keg.domain |
|
vm.uma.[num]_Bucket.keg.domain.[num] |
|
vm.uma.[num]_Bucket.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.[num]_Bucket.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.[num]_Bucket.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.[num]_Bucket.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.[num]_Bucket.keg.ipers |
items available per-slab |
vm.uma.[num]_Bucket.keg.name |
Keg name |
vm.uma.[num]_Bucket.keg.ppera |
pages per-slab allocation |
vm.uma.[num]_Bucket.keg.reserve |
number of reserved items |
vm.uma.[num]_Bucket.keg.rsize |
Real object size with alignment |
vm.uma.[num]_Bucket.limit |
|
vm.uma.[num]_Bucket.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.[num]_Bucket.limit.items |
Current number of allocated items if limit is set |
vm.uma.[num]_Bucket.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.[num]_Bucket.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.[num]_Bucket.limit.sleeps |
Total zone limit sleeps |
vm.uma.[num]_Bucket.size |
Allocation size |
vm.uma.[num]_Bucket.stats |
|
vm.uma.[num]_Bucket.stats.allocs |
Total allocation calls |
vm.uma.[num]_Bucket.stats.current |
Current number of allocated items |
vm.uma.[num]_Bucket.stats.fails |
Number of allocation failures |
vm.uma.[num]_Bucket.stats.frees |
Total free calls |
vm.uma.[num]_Bucket.stats.xdomain |
Free calls from the wrong domain |
vm.uma.abd_chunk |
|
vm.uma.abd_chunk.bucket_size |
Desired per-cpu cache size |
vm.uma.abd_chunk.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.abd_chunk.domain |
|
vm.uma.abd_chunk.domain.[num] |
|
vm.uma.abd_chunk.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.abd_chunk.domain.[num].imax |
maximum item count in this period |
vm.uma.abd_chunk.domain.[num].imin |
minimum item count in this period |
vm.uma.abd_chunk.domain.[num].limin |
Long time minimum item count |
vm.uma.abd_chunk.domain.[num].nitems |
number of items in this domain |
vm.uma.abd_chunk.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.abd_chunk.domain.[num].wss |
Working set size |
vm.uma.abd_chunk.flags |
Allocator configuration flags |
vm.uma.abd_chunk.keg |
|
vm.uma.abd_chunk.keg.align |
item alignment mask |
vm.uma.abd_chunk.keg.domain |
|
vm.uma.abd_chunk.keg.domain.[num] |
|
vm.uma.abd_chunk.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.abd_chunk.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.abd_chunk.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.abd_chunk.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.abd_chunk.keg.ipers |
items available per-slab |
vm.uma.abd_chunk.keg.name |
Keg name |
vm.uma.abd_chunk.keg.ppera |
pages per-slab allocation |
vm.uma.abd_chunk.keg.reserve |
number of reserved items |
vm.uma.abd_chunk.keg.rsize |
Real object size with alignment |
vm.uma.abd_chunk.limit |
|
vm.uma.abd_chunk.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.abd_chunk.limit.items |
Current number of allocated items if limit is set |
vm.uma.abd_chunk.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.abd_chunk.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.abd_chunk.limit.sleeps |
Total zone limit sleeps |
vm.uma.abd_chunk.size |
Allocation size |
vm.uma.abd_chunk.stats |
|
vm.uma.abd_chunk.stats.allocs |
Total allocation calls |
vm.uma.abd_chunk.stats.current |
Current number of allocated items |
vm.uma.abd_chunk.stats.fails |
Number of allocation failures |
vm.uma.abd_chunk.stats.frees |
Total free calls |
vm.uma.abd_chunk.stats.xdomain |
Free calls from the wrong domain |
vm.uma.ada_ccb |
|
vm.uma.ada_ccb.bucket_size |
Desired per-cpu cache size |
vm.uma.ada_ccb.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.ada_ccb.domain |
|
vm.uma.ada_ccb.domain.[num] |
|
vm.uma.ada_ccb.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.ada_ccb.domain.[num].imax |
maximum item count in this period |
vm.uma.ada_ccb.domain.[num].imin |
minimum item count in this period |
vm.uma.ada_ccb.domain.[num].limin |
Long time minimum item count |
vm.uma.ada_ccb.domain.[num].nitems |
number of items in this domain |
vm.uma.ada_ccb.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.ada_ccb.domain.[num].wss |
Working set size |
vm.uma.ada_ccb.flags |
Allocator configuration flags |
vm.uma.ada_ccb.keg |
|
vm.uma.ada_ccb.keg.align |
item alignment mask |
vm.uma.ada_ccb.keg.domain |
|
vm.uma.ada_ccb.keg.domain.[num] |
|
vm.uma.ada_ccb.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.ada_ccb.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.ada_ccb.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.ada_ccb.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.ada_ccb.keg.ipers |
items available per-slab |
vm.uma.ada_ccb.keg.name |
Keg name |
vm.uma.ada_ccb.keg.ppera |
pages per-slab allocation |
vm.uma.ada_ccb.keg.reserve |
number of reserved items |
vm.uma.ada_ccb.keg.rsize |
Real object size with alignment |
vm.uma.ada_ccb.limit |
|
vm.uma.ada_ccb.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.ada_ccb.limit.items |
Current number of allocated items if limit is set |
vm.uma.ada_ccb.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.ada_ccb.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.ada_ccb.limit.sleeps |
Total zone limit sleeps |
vm.uma.ada_ccb.size |
Allocation size |
vm.uma.ada_ccb.stats |
|
vm.uma.ada_ccb.stats.allocs |
Total allocation calls |
vm.uma.ada_ccb.stats.current |
Current number of allocated items |
vm.uma.ada_ccb.stats.fails |
Number of allocation failures |
vm.uma.ada_ccb.stats.frees |
Total free calls |
vm.uma.ada_ccb.stats.xdomain |
Free calls from the wrong domain |
vm.uma.amdgpu_fence |
|
vm.uma.amdgpu_fence.bucket_size |
Desired per-cpu cache size |
vm.uma.amdgpu_fence.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.amdgpu_fence.domain |
|
vm.uma.amdgpu_fence.domain.[num] |
|
vm.uma.amdgpu_fence.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.amdgpu_fence.domain.[num].imax |
maximum item count in this period |
vm.uma.amdgpu_fence.domain.[num].imin |
minimum item count in this period |
vm.uma.amdgpu_fence.domain.[num].limin |
Long time minimum item count |
vm.uma.amdgpu_fence.domain.[num].nitems |
number of items in this domain |
vm.uma.amdgpu_fence.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.amdgpu_fence.domain.[num].wss |
Working set size |
vm.uma.amdgpu_fence.flags |
Allocator configuration flags |
vm.uma.amdgpu_fence.keg |
|
vm.uma.amdgpu_fence.keg.align |
item alignment mask |
vm.uma.amdgpu_fence.keg.domain |
|
vm.uma.amdgpu_fence.keg.domain.[num] |
|
vm.uma.amdgpu_fence.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.amdgpu_fence.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.amdgpu_fence.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.amdgpu_fence.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.amdgpu_fence.keg.ipers |
items available per-slab |
vm.uma.amdgpu_fence.keg.name |
Keg name |
vm.uma.amdgpu_fence.keg.ppera |
pages per-slab allocation |
vm.uma.amdgpu_fence.keg.reserve |
number of reserved items |
vm.uma.amdgpu_fence.keg.rsize |
Real object size with alignment |
vm.uma.amdgpu_fence.limit |
|
vm.uma.amdgpu_fence.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.amdgpu_fence.limit.items |
Current number of allocated items if limit is set |
vm.uma.amdgpu_fence.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.amdgpu_fence.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.amdgpu_fence.limit.sleeps |
Total zone limit sleeps |
vm.uma.amdgpu_fence.size |
Allocation size |
vm.uma.amdgpu_fence.stats |
|
vm.uma.amdgpu_fence.stats.allocs |
Total allocation calls |
vm.uma.amdgpu_fence.stats.current |
Current number of allocated items |
vm.uma.amdgpu_fence.stats.fails |
Number of allocation failures |
vm.uma.amdgpu_fence.stats.frees |
Total free calls |
vm.uma.amdgpu_fence.stats.xdomain |
Free calls from the wrong domain |
vm.uma.amdgpu_sync |
|
vm.uma.amdgpu_sync.bucket_size |
Desired per-cpu cache size |
vm.uma.amdgpu_sync.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.amdgpu_sync.domain |
|
vm.uma.amdgpu_sync.domain.[num] |
|
vm.uma.amdgpu_sync.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.amdgpu_sync.domain.[num].imax |
maximum item count in this period |
vm.uma.amdgpu_sync.domain.[num].imin |
minimum item count in this period |
vm.uma.amdgpu_sync.domain.[num].limin |
Long time minimum item count |
vm.uma.amdgpu_sync.domain.[num].nitems |
number of items in this domain |
vm.uma.amdgpu_sync.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.amdgpu_sync.domain.[num].wss |
Working set size |
vm.uma.amdgpu_sync.flags |
Allocator configuration flags |
vm.uma.amdgpu_sync.keg |
|
vm.uma.amdgpu_sync.keg.align |
item alignment mask |
vm.uma.amdgpu_sync.keg.domain |
|
vm.uma.amdgpu_sync.keg.domain.[num] |
|
vm.uma.amdgpu_sync.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.amdgpu_sync.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.amdgpu_sync.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.amdgpu_sync.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.amdgpu_sync.keg.ipers |
items available per-slab |
vm.uma.amdgpu_sync.keg.name |
Keg name |
vm.uma.amdgpu_sync.keg.ppera |
pages per-slab allocation |
vm.uma.amdgpu_sync.keg.reserve |
number of reserved items |
vm.uma.amdgpu_sync.keg.rsize |
Real object size with alignment |
vm.uma.amdgpu_sync.limit |
|
vm.uma.amdgpu_sync.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.amdgpu_sync.limit.items |
Current number of allocated items if limit is set |
vm.uma.amdgpu_sync.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.amdgpu_sync.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.amdgpu_sync.limit.sleeps |
Total zone limit sleeps |
vm.uma.amdgpu_sync.size |
Allocation size |
vm.uma.amdgpu_sync.stats |
|
vm.uma.amdgpu_sync.stats.allocs |
Total allocation calls |
vm.uma.amdgpu_sync.stats.current |
Current number of allocated items |
vm.uma.amdgpu_sync.stats.fails |
Number of allocation failures |
vm.uma.amdgpu_sync.stats.frees |
Total free calls |
vm.uma.amdgpu_sync.stats.xdomain |
Free calls from the wrong domain |
vm.uma.arc_buf_hdr_t_full |
|
vm.uma.arc_buf_hdr_t_full.bucket_size |
Desired per-cpu cache size |
vm.uma.arc_buf_hdr_t_full.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.arc_buf_hdr_t_full.domain |
|
vm.uma.arc_buf_hdr_t_full.domain.[num] |
|
vm.uma.arc_buf_hdr_t_full.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.arc_buf_hdr_t_full.domain.[num].imax |
maximum item count in this period |
vm.uma.arc_buf_hdr_t_full.domain.[num].imin |
minimum item count in this period |
vm.uma.arc_buf_hdr_t_full.domain.[num].limin |
Long time minimum item count |
vm.uma.arc_buf_hdr_t_full.domain.[num].nitems |
number of items in this domain |
vm.uma.arc_buf_hdr_t_full.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.arc_buf_hdr_t_full.domain.[num].wss |
Working set size |
vm.uma.arc_buf_hdr_t_full.flags |
Allocator configuration flags |
vm.uma.arc_buf_hdr_t_full.keg |
|
vm.uma.arc_buf_hdr_t_full.keg.align |
item alignment mask |
vm.uma.arc_buf_hdr_t_full.keg.domain |
|
vm.uma.arc_buf_hdr_t_full.keg.domain.[num] |
|
vm.uma.arc_buf_hdr_t_full.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.arc_buf_hdr_t_full.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.arc_buf_hdr_t_full.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.arc_buf_hdr_t_full.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.arc_buf_hdr_t_full.keg.ipers |
items available per-slab |
vm.uma.arc_buf_hdr_t_full.keg.name |
Keg name |
vm.uma.arc_buf_hdr_t_full.keg.ppera |
pages per-slab allocation |
vm.uma.arc_buf_hdr_t_full.keg.reserve |
number of reserved items |
vm.uma.arc_buf_hdr_t_full.keg.rsize |
Real object size with alignment |
vm.uma.arc_buf_hdr_t_full.limit |
|
vm.uma.arc_buf_hdr_t_full.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.arc_buf_hdr_t_full.limit.items |
Current number of allocated items if limit is set |
vm.uma.arc_buf_hdr_t_full.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.arc_buf_hdr_t_full.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.arc_buf_hdr_t_full.limit.sleeps |
Total zone limit sleeps |
vm.uma.arc_buf_hdr_t_full.size |
Allocation size |
vm.uma.arc_buf_hdr_t_full.stats |
|
vm.uma.arc_buf_hdr_t_full.stats.allocs |
Total allocation calls |
vm.uma.arc_buf_hdr_t_full.stats.current |
Current number of allocated items |
vm.uma.arc_buf_hdr_t_full.stats.fails |
Number of allocation failures |
vm.uma.arc_buf_hdr_t_full.stats.frees |
Total free calls |
vm.uma.arc_buf_hdr_t_full.stats.xdomain |
Free calls from the wrong domain |
vm.uma.arc_buf_hdr_t_l[num]only |
|
vm.uma.arc_buf_hdr_t_l[num]only.bucket_size |
Desired per-cpu cache size |
vm.uma.arc_buf_hdr_t_l[num]only.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.arc_buf_hdr_t_l[num]only.domain |
|
vm.uma.arc_buf_hdr_t_l[num]only.domain.[num] |
|
vm.uma.arc_buf_hdr_t_l[num]only.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.arc_buf_hdr_t_l[num]only.domain.[num].imax |
maximum item count in this period |
vm.uma.arc_buf_hdr_t_l[num]only.domain.[num].imin |
minimum item count in this period |
vm.uma.arc_buf_hdr_t_l[num]only.domain.[num].limin |
Long time minimum item count |
vm.uma.arc_buf_hdr_t_l[num]only.domain.[num].nitems |
number of items in this domain |
vm.uma.arc_buf_hdr_t_l[num]only.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.arc_buf_hdr_t_l[num]only.domain.[num].wss |
Working set size |
vm.uma.arc_buf_hdr_t_l[num]only.flags |
Allocator configuration flags |
vm.uma.arc_buf_hdr_t_l[num]only.keg |
|
vm.uma.arc_buf_hdr_t_l[num]only.keg.align |
item alignment mask |
vm.uma.arc_buf_hdr_t_l[num]only.keg.domain |
|
vm.uma.arc_buf_hdr_t_l[num]only.keg.domain.[num] |
|
vm.uma.arc_buf_hdr_t_l[num]only.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.arc_buf_hdr_t_l[num]only.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.arc_buf_hdr_t_l[num]only.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.arc_buf_hdr_t_l[num]only.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.arc_buf_hdr_t_l[num]only.keg.ipers |
items available per-slab |
vm.uma.arc_buf_hdr_t_l[num]only.keg.name |
Keg name |
vm.uma.arc_buf_hdr_t_l[num]only.keg.ppera |
pages per-slab allocation |
vm.uma.arc_buf_hdr_t_l[num]only.keg.reserve |
number of reserved items |
vm.uma.arc_buf_hdr_t_l[num]only.keg.rsize |
Real object size with alignment |
vm.uma.arc_buf_hdr_t_l[num]only.limit |
|
vm.uma.arc_buf_hdr_t_l[num]only.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.arc_buf_hdr_t_l[num]only.limit.items |
Current number of allocated items if limit is set |
vm.uma.arc_buf_hdr_t_l[num]only.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.arc_buf_hdr_t_l[num]only.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.arc_buf_hdr_t_l[num]only.limit.sleeps |
Total zone limit sleeps |
vm.uma.arc_buf_hdr_t_l[num]only.size |
Allocation size |
vm.uma.arc_buf_hdr_t_l[num]only.stats |
|
vm.uma.arc_buf_hdr_t_l[num]only.stats.allocs |
Total allocation calls |
vm.uma.arc_buf_hdr_t_l[num]only.stats.current |
Current number of allocated items |
vm.uma.arc_buf_hdr_t_l[num]only.stats.fails |
Number of allocation failures |
vm.uma.arc_buf_hdr_t_l[num]only.stats.frees |
Total free calls |
vm.uma.arc_buf_hdr_t_l[num]only.stats.xdomain |
Free calls from the wrong domain |
vm.uma.arc_buf_t |
|
vm.uma.arc_buf_t.bucket_size |
Desired per-cpu cache size |
vm.uma.arc_buf_t.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.arc_buf_t.domain |
|
vm.uma.arc_buf_t.domain.[num] |
|
vm.uma.arc_buf_t.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.arc_buf_t.domain.[num].imax |
maximum item count in this period |
vm.uma.arc_buf_t.domain.[num].imin |
minimum item count in this period |
vm.uma.arc_buf_t.domain.[num].limin |
Long time minimum item count |
vm.uma.arc_buf_t.domain.[num].nitems |
number of items in this domain |
vm.uma.arc_buf_t.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.arc_buf_t.domain.[num].wss |
Working set size |
vm.uma.arc_buf_t.flags |
Allocator configuration flags |
vm.uma.arc_buf_t.keg |
|
vm.uma.arc_buf_t.keg.align |
item alignment mask |
vm.uma.arc_buf_t.keg.domain |
|
vm.uma.arc_buf_t.keg.domain.[num] |
|
vm.uma.arc_buf_t.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.arc_buf_t.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.arc_buf_t.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.arc_buf_t.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.arc_buf_t.keg.ipers |
items available per-slab |
vm.uma.arc_buf_t.keg.name |
Keg name |
vm.uma.arc_buf_t.keg.ppera |
pages per-slab allocation |
vm.uma.arc_buf_t.keg.reserve |
number of reserved items |
vm.uma.arc_buf_t.keg.rsize |
Real object size with alignment |
vm.uma.arc_buf_t.limit |
|
vm.uma.arc_buf_t.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.arc_buf_t.limit.items |
Current number of allocated items if limit is set |
vm.uma.arc_buf_t.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.arc_buf_t.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.arc_buf_t.limit.sleeps |
Total zone limit sleeps |
vm.uma.arc_buf_t.size |
Allocation size |
vm.uma.arc_buf_t.stats |
|
vm.uma.arc_buf_t.stats.allocs |
Total allocation calls |
vm.uma.arc_buf_t.stats.current |
Current number of allocated items |
vm.uma.arc_buf_t.stats.fails |
Number of allocation failures |
vm.uma.arc_buf_t.stats.frees |
Total free calls |
vm.uma.arc_buf_t.stats.xdomain |
Free calls from the wrong domain |
vm.uma.audit_record |
|
vm.uma.audit_record.bucket_size |
Desired per-cpu cache size |
vm.uma.audit_record.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.audit_record.domain |
|
vm.uma.audit_record.domain.[num] |
|
vm.uma.audit_record.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.audit_record.domain.[num].imax |
maximum item count in this period |
vm.uma.audit_record.domain.[num].imin |
minimum item count in this period |
vm.uma.audit_record.domain.[num].limin |
Long time minimum item count |
vm.uma.audit_record.domain.[num].nitems |
number of items in this domain |
vm.uma.audit_record.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.audit_record.domain.[num].wss |
Working set size |
vm.uma.audit_record.flags |
Allocator configuration flags |
vm.uma.audit_record.keg |
|
vm.uma.audit_record.keg.align |
item alignment mask |
vm.uma.audit_record.keg.domain |
|
vm.uma.audit_record.keg.domain.[num] |
|
vm.uma.audit_record.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.audit_record.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.audit_record.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.audit_record.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.audit_record.keg.ipers |
items available per-slab |
vm.uma.audit_record.keg.name |
Keg name |
vm.uma.audit_record.keg.ppera |
pages per-slab allocation |
vm.uma.audit_record.keg.reserve |
number of reserved items |
vm.uma.audit_record.keg.rsize |
Real object size with alignment |
vm.uma.audit_record.limit |
|
vm.uma.audit_record.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.audit_record.limit.items |
Current number of allocated items if limit is set |
vm.uma.audit_record.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.audit_record.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.audit_record.limit.sleeps |
Total zone limit sleeps |
vm.uma.audit_record.size |
Allocation size |
vm.uma.audit_record.stats |
|
vm.uma.audit_record.stats.allocs |
Total allocation calls |
vm.uma.audit_record.stats.current |
Current number of allocated items |
vm.uma.audit_record.stats.fails |
Number of allocation failures |
vm.uma.audit_record.stats.frees |
Total free calls |
vm.uma.audit_record.stats.xdomain |
Free calls from the wrong domain |
vm.uma.autofs_node |
|
vm.uma.autofs_node.bucket_size |
Desired per-cpu cache size |
vm.uma.autofs_node.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.autofs_node.domain |
|
vm.uma.autofs_node.domain.[num] |
|
vm.uma.autofs_node.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.autofs_node.domain.[num].imax |
maximum item count in this period |
vm.uma.autofs_node.domain.[num].imin |
minimum item count in this period |
vm.uma.autofs_node.domain.[num].limin |
Long time minimum item count |
vm.uma.autofs_node.domain.[num].nitems |
number of items in this domain |
vm.uma.autofs_node.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.autofs_node.domain.[num].wss |
Working set size |
vm.uma.autofs_node.flags |
Allocator configuration flags |
vm.uma.autofs_node.keg |
|
vm.uma.autofs_node.keg.align |
item alignment mask |
vm.uma.autofs_node.keg.domain |
|
vm.uma.autofs_node.keg.domain.[num] |
|
vm.uma.autofs_node.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.autofs_node.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.autofs_node.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.autofs_node.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.autofs_node.keg.ipers |
items available per-slab |
vm.uma.autofs_node.keg.name |
Keg name |
vm.uma.autofs_node.keg.ppera |
pages per-slab allocation |
vm.uma.autofs_node.keg.reserve |
number of reserved items |
vm.uma.autofs_node.keg.rsize |
Real object size with alignment |
vm.uma.autofs_node.limit |
|
vm.uma.autofs_node.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.autofs_node.limit.items |
Current number of allocated items if limit is set |
vm.uma.autofs_node.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.autofs_node.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.autofs_node.limit.sleeps |
Total zone limit sleeps |
vm.uma.autofs_node.size |
Allocation size |
vm.uma.autofs_node.stats |
|
vm.uma.autofs_node.stats.allocs |
Total allocation calls |
vm.uma.autofs_node.stats.current |
Current number of allocated items |
vm.uma.autofs_node.stats.fails |
Number of allocation failures |
vm.uma.autofs_node.stats.frees |
Total free calls |
vm.uma.autofs_node.stats.xdomain |
Free calls from the wrong domain |
vm.uma.autofs_request |
|
vm.uma.autofs_request.bucket_size |
Desired per-cpu cache size |
vm.uma.autofs_request.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.autofs_request.domain |
|
vm.uma.autofs_request.domain.[num] |
|
vm.uma.autofs_request.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.autofs_request.domain.[num].imax |
maximum item count in this period |
vm.uma.autofs_request.domain.[num].imin |
minimum item count in this period |
vm.uma.autofs_request.domain.[num].limin |
Long time minimum item count |
vm.uma.autofs_request.domain.[num].nitems |
number of items in this domain |
vm.uma.autofs_request.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.autofs_request.domain.[num].wss |
Working set size |
vm.uma.autofs_request.flags |
Allocator configuration flags |
vm.uma.autofs_request.keg |
|
vm.uma.autofs_request.keg.align |
item alignment mask |
vm.uma.autofs_request.keg.domain |
|
vm.uma.autofs_request.keg.domain.[num] |
|
vm.uma.autofs_request.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.autofs_request.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.autofs_request.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.autofs_request.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.autofs_request.keg.ipers |
items available per-slab |
vm.uma.autofs_request.keg.name |
Keg name |
vm.uma.autofs_request.keg.ppera |
pages per-slab allocation |
vm.uma.autofs_request.keg.reserve |
number of reserved items |
vm.uma.autofs_request.keg.rsize |
Real object size with alignment |
vm.uma.autofs_request.limit |
|
vm.uma.autofs_request.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.autofs_request.limit.items |
Current number of allocated items if limit is set |
vm.uma.autofs_request.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.autofs_request.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.autofs_request.limit.sleeps |
Total zone limit sleeps |
vm.uma.autofs_request.size |
Allocation size |
vm.uma.autofs_request.stats |
|
vm.uma.autofs_request.stats.allocs |
Total allocation calls |
vm.uma.autofs_request.stats.current |
Current number of allocated items |
vm.uma.autofs_request.stats.fails |
Number of allocation failures |
vm.uma.autofs_request.stats.frees |
Total free calls |
vm.uma.autofs_request.stats.xdomain |
Free calls from the wrong domain |
vm.uma.bridge_rtnode |
|
vm.uma.bridge_rtnode.bucket_size |
Desired per-cpu cache size |
vm.uma.bridge_rtnode.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.bridge_rtnode.domain |
|
vm.uma.bridge_rtnode.domain.[num] |
|
vm.uma.bridge_rtnode.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.bridge_rtnode.domain.[num].imax |
maximum item count in this period |
vm.uma.bridge_rtnode.domain.[num].imin |
minimum item count in this period |
vm.uma.bridge_rtnode.domain.[num].limin |
Long time minimum item count |
vm.uma.bridge_rtnode.domain.[num].nitems |
number of items in this domain |
vm.uma.bridge_rtnode.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.bridge_rtnode.domain.[num].wss |
Working set size |
vm.uma.bridge_rtnode.flags |
Allocator configuration flags |
vm.uma.bridge_rtnode.keg |
|
vm.uma.bridge_rtnode.keg.align |
item alignment mask |
vm.uma.bridge_rtnode.keg.domain |
|
vm.uma.bridge_rtnode.keg.domain.[num] |
|
vm.uma.bridge_rtnode.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.bridge_rtnode.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.bridge_rtnode.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.bridge_rtnode.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.bridge_rtnode.keg.ipers |
items available per-slab |
vm.uma.bridge_rtnode.keg.name |
Keg name |
vm.uma.bridge_rtnode.keg.ppera |
pages per-slab allocation |
vm.uma.bridge_rtnode.keg.reserve |
number of reserved items |
vm.uma.bridge_rtnode.keg.rsize |
Real object size with alignment |
vm.uma.bridge_rtnode.limit |
|
vm.uma.bridge_rtnode.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.bridge_rtnode.limit.items |
Current number of allocated items if limit is set |
vm.uma.bridge_rtnode.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.bridge_rtnode.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.bridge_rtnode.limit.sleeps |
Total zone limit sleeps |
vm.uma.bridge_rtnode.size |
Allocation size |
vm.uma.bridge_rtnode.stats |
|
vm.uma.bridge_rtnode.stats.allocs |
Total allocation calls |
vm.uma.bridge_rtnode.stats.current |
Current number of allocated items |
vm.uma.bridge_rtnode.stats.fails |
Number of allocation failures |
vm.uma.bridge_rtnode.stats.frees |
Total free calls |
vm.uma.bridge_rtnode.stats.xdomain |
Free calls from the wrong domain |
vm.uma.bridge_rtnode_[num] |
|
vm.uma.bridge_rtnode_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.bridge_rtnode_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.bridge_rtnode_[num].domain |
|
vm.uma.bridge_rtnode_[num].domain.[num] |
|
vm.uma.bridge_rtnode_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.bridge_rtnode_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.bridge_rtnode_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.bridge_rtnode_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.bridge_rtnode_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.bridge_rtnode_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.bridge_rtnode_[num].domain.[num].wss |
Working set size |
vm.uma.bridge_rtnode_[num].flags |
Allocator configuration flags |
vm.uma.bridge_rtnode_[num].keg |
|
vm.uma.bridge_rtnode_[num].keg.align |
item alignment mask |
vm.uma.bridge_rtnode_[num].keg.domain |
|
vm.uma.bridge_rtnode_[num].keg.domain.[num] |
|
vm.uma.bridge_rtnode_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.bridge_rtnode_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.bridge_rtnode_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.bridge_rtnode_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.bridge_rtnode_[num].keg.ipers |
items available per-slab |
vm.uma.bridge_rtnode_[num].keg.name |
Keg name |
vm.uma.bridge_rtnode_[num].keg.ppera |
pages per-slab allocation |
vm.uma.bridge_rtnode_[num].keg.reserve |
number of reserved items |
vm.uma.bridge_rtnode_[num].keg.rsize |
Real object size with alignment |
vm.uma.bridge_rtnode_[num].limit |
|
vm.uma.bridge_rtnode_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.bridge_rtnode_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.bridge_rtnode_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.bridge_rtnode_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.bridge_rtnode_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.bridge_rtnode_[num].size |
Allocation size |
vm.uma.bridge_rtnode_[num].stats |
|
vm.uma.bridge_rtnode_[num].stats.allocs |
Total allocation calls |
vm.uma.bridge_rtnode_[num].stats.current |
Current number of allocated items |
vm.uma.bridge_rtnode_[num].stats.fails |
Number of allocation failures |
vm.uma.bridge_rtnode_[num].stats.frees |
Total free calls |
vm.uma.bridge_rtnode_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.brt_entry_cache |
|
vm.uma.brt_entry_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.brt_entry_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.brt_entry_cache.domain |
|
vm.uma.brt_entry_cache.domain.[num] |
|
vm.uma.brt_entry_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.brt_entry_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.brt_entry_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.brt_entry_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.brt_entry_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.brt_entry_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.brt_entry_cache.domain.[num].wss |
Working set size |
vm.uma.brt_entry_cache.flags |
Allocator configuration flags |
vm.uma.brt_entry_cache.keg |
|
vm.uma.brt_entry_cache.keg.align |
item alignment mask |
vm.uma.brt_entry_cache.keg.domain |
|
vm.uma.brt_entry_cache.keg.domain.[num] |
|
vm.uma.brt_entry_cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.brt_entry_cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.brt_entry_cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.brt_entry_cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.brt_entry_cache.keg.ipers |
items available per-slab |
vm.uma.brt_entry_cache.keg.name |
Keg name |
vm.uma.brt_entry_cache.keg.ppera |
pages per-slab allocation |
vm.uma.brt_entry_cache.keg.reserve |
number of reserved items |
vm.uma.brt_entry_cache.keg.rsize |
Real object size with alignment |
vm.uma.brt_entry_cache.limit |
|
vm.uma.brt_entry_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.brt_entry_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.brt_entry_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.brt_entry_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.brt_entry_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.brt_entry_cache.size |
Allocation size |
vm.uma.brt_entry_cache.stats |
|
vm.uma.brt_entry_cache.stats.allocs |
Total allocation calls |
vm.uma.brt_entry_cache.stats.current |
Current number of allocated items |
vm.uma.brt_entry_cache.stats.fails |
Number of allocation failures |
vm.uma.brt_entry_cache.stats.frees |
Total free calls |
vm.uma.brt_entry_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.brt_pending_entry_cache |
|
vm.uma.brt_pending_entry_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.brt_pending_entry_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.brt_pending_entry_cache.domain |
|
vm.uma.brt_pending_entry_cache.domain.[num] |
|
vm.uma.brt_pending_entry_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.brt_pending_entry_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.brt_pending_entry_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.brt_pending_entry_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.brt_pending_entry_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.brt_pending_entry_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.brt_pending_entry_cache.domain.[num].wss |
Working set size |
vm.uma.brt_pending_entry_cache.flags |
Allocator configuration flags |
vm.uma.brt_pending_entry_cache.keg |
|
vm.uma.brt_pending_entry_cache.keg.align |
item alignment mask |
vm.uma.brt_pending_entry_cache.keg.domain |
|
vm.uma.brt_pending_entry_cache.keg.domain.[num] |
|
vm.uma.brt_pending_entry_cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.brt_pending_entry_cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.brt_pending_entry_cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.brt_pending_entry_cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.brt_pending_entry_cache.keg.ipers |
items available per-slab |
vm.uma.brt_pending_entry_cache.keg.name |
Keg name |
vm.uma.brt_pending_entry_cache.keg.ppera |
pages per-slab allocation |
vm.uma.brt_pending_entry_cache.keg.reserve |
number of reserved items |
vm.uma.brt_pending_entry_cache.keg.rsize |
Real object size with alignment |
vm.uma.brt_pending_entry_cache.limit |
|
vm.uma.brt_pending_entry_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.brt_pending_entry_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.brt_pending_entry_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.brt_pending_entry_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.brt_pending_entry_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.brt_pending_entry_cache.size |
Allocation size |
vm.uma.brt_pending_entry_cache.stats |
|
vm.uma.brt_pending_entry_cache.stats.allocs |
Total allocation calls |
vm.uma.brt_pending_entry_cache.stats.current |
Current number of allocated items |
vm.uma.brt_pending_entry_cache.stats.fails |
Number of allocation failures |
vm.uma.brt_pending_entry_cache.stats.frees |
Total free calls |
vm.uma.brt_pending_entry_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.buf_free_cache |
|
vm.uma.buf_free_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.buf_free_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.buf_free_cache.domain |
|
vm.uma.buf_free_cache.domain.[num] |
|
vm.uma.buf_free_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.buf_free_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.buf_free_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.buf_free_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.buf_free_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.buf_free_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.buf_free_cache.domain.[num].wss |
Working set size |
vm.uma.buf_free_cache.flags |
Allocator configuration flags |
vm.uma.buf_free_cache.keg |
|
vm.uma.buf_free_cache.keg.name |
Keg name |
vm.uma.buf_free_cache.limit |
|
vm.uma.buf_free_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.buf_free_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.buf_free_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.buf_free_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.buf_free_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.buf_free_cache.size |
Allocation size |
vm.uma.buf_free_cache.stats |
|
vm.uma.buf_free_cache.stats.allocs |
Total allocation calls |
vm.uma.buf_free_cache.stats.current |
Current number of allocated items |
vm.uma.buf_free_cache.stats.fails |
Number of allocation failures |
vm.uma.buf_free_cache.stats.frees |
Total free calls |
vm.uma.buf_free_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.buffer_arena_[num] |
|
vm.uma.buffer_arena_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.buffer_arena_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.buffer_arena_[num].domain |
|
vm.uma.buffer_arena_[num].domain.[num] |
|
vm.uma.buffer_arena_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.buffer_arena_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.buffer_arena_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.buffer_arena_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.buffer_arena_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.buffer_arena_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.buffer_arena_[num].domain.[num].wss |
Working set size |
vm.uma.buffer_arena_[num].flags |
Allocator configuration flags |
vm.uma.buffer_arena_[num].keg |
|
vm.uma.buffer_arena_[num].keg.name |
Keg name |
vm.uma.buffer_arena_[num].limit |
|
vm.uma.buffer_arena_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.buffer_arena_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.buffer_arena_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.buffer_arena_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.buffer_arena_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.buffer_arena_[num].size |
Allocation size |
vm.uma.buffer_arena_[num].stats |
|
vm.uma.buffer_arena_[num].stats.allocs |
Total allocation calls |
vm.uma.buffer_arena_[num].stats.current |
Current number of allocated items |
vm.uma.buffer_arena_[num].stats.fails |
Number of allocation failures |
vm.uma.buffer_arena_[num].stats.frees |
Total free calls |
vm.uma.buffer_arena_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.clpbuf |
|
vm.uma.clpbuf.bucket_size |
Desired per-cpu cache size |
vm.uma.clpbuf.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.clpbuf.domain |
|
vm.uma.clpbuf.domain.[num] |
|
vm.uma.clpbuf.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.clpbuf.domain.[num].imax |
maximum item count in this period |
vm.uma.clpbuf.domain.[num].imin |
minimum item count in this period |
vm.uma.clpbuf.domain.[num].limin |
Long time minimum item count |
vm.uma.clpbuf.domain.[num].nitems |
number of items in this domain |
vm.uma.clpbuf.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.clpbuf.domain.[num].wss |
Working set size |
vm.uma.clpbuf.flags |
Allocator configuration flags |
vm.uma.clpbuf.keg |
|
vm.uma.clpbuf.keg.align |
item alignment mask |
vm.uma.clpbuf.keg.domain |
|
vm.uma.clpbuf.keg.domain.[num] |
|
vm.uma.clpbuf.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.clpbuf.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.clpbuf.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.clpbuf.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.clpbuf.keg.ipers |
items available per-slab |
vm.uma.clpbuf.keg.name |
Keg name |
vm.uma.clpbuf.keg.ppera |
pages per-slab allocation |
vm.uma.clpbuf.keg.reserve |
number of reserved items |
vm.uma.clpbuf.keg.rsize |
Real object size with alignment |
vm.uma.clpbuf.limit |
|
vm.uma.clpbuf.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.clpbuf.limit.items |
Current number of allocated items if limit is set |
vm.uma.clpbuf.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.clpbuf.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.clpbuf.limit.sleeps |
Total zone limit sleeps |
vm.uma.clpbuf.size |
Allocation size |
vm.uma.clpbuf.stats |
|
vm.uma.clpbuf.stats.allocs |
Total allocation calls |
vm.uma.clpbuf.stats.current |
Current number of allocated items |
vm.uma.clpbuf.stats.fails |
Number of allocation failures |
vm.uma.clpbuf.stats.frees |
Total free calls |
vm.uma.clpbuf.stats.xdomain |
Free calls from the wrong domain |
vm.uma.cpuset |
|
vm.uma.cpuset.bucket_size |
Desired per-cpu cache size |
vm.uma.cpuset.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.cpuset.domain |
|
vm.uma.cpuset.domain.[num] |
|
vm.uma.cpuset.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.cpuset.domain.[num].imax |
maximum item count in this period |
vm.uma.cpuset.domain.[num].imin |
minimum item count in this period |
vm.uma.cpuset.domain.[num].limin |
Long time minimum item count |
vm.uma.cpuset.domain.[num].nitems |
number of items in this domain |
vm.uma.cpuset.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.cpuset.domain.[num].wss |
Working set size |
vm.uma.cpuset.flags |
Allocator configuration flags |
vm.uma.cpuset.keg |
|
vm.uma.cpuset.keg.align |
item alignment mask |
vm.uma.cpuset.keg.domain |
|
vm.uma.cpuset.keg.domain.[num] |
|
vm.uma.cpuset.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.cpuset.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.cpuset.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.cpuset.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.cpuset.keg.ipers |
items available per-slab |
vm.uma.cpuset.keg.name |
Keg name |
vm.uma.cpuset.keg.ppera |
pages per-slab allocation |
vm.uma.cpuset.keg.reserve |
number of reserved items |
vm.uma.cpuset.keg.rsize |
Real object size with alignment |
vm.uma.cpuset.limit |
|
vm.uma.cpuset.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.cpuset.limit.items |
Current number of allocated items if limit is set |
vm.uma.cpuset.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.cpuset.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.cpuset.limit.sleeps |
Total zone limit sleeps |
vm.uma.cpuset.size |
Allocation size |
vm.uma.cpuset.stats |
|
vm.uma.cpuset.stats.allocs |
Total allocation calls |
vm.uma.cpuset.stats.current |
Current number of allocated items |
vm.uma.cpuset.stats.fails |
Number of allocation failures |
vm.uma.cpuset.stats.frees |
Total free calls |
vm.uma.cpuset.stats.xdomain |
Free calls from the wrong domain |
vm.uma.cryptop |
|
vm.uma.cryptop.bucket_size |
Desired per-cpu cache size |
vm.uma.cryptop.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.cryptop.domain |
|
vm.uma.cryptop.domain.[num] |
|
vm.uma.cryptop.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.cryptop.domain.[num].imax |
maximum item count in this period |
vm.uma.cryptop.domain.[num].imin |
minimum item count in this period |
vm.uma.cryptop.domain.[num].limin |
Long time minimum item count |
vm.uma.cryptop.domain.[num].nitems |
number of items in this domain |
vm.uma.cryptop.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.cryptop.domain.[num].wss |
Working set size |
vm.uma.cryptop.flags |
Allocator configuration flags |
vm.uma.cryptop.keg |
|
vm.uma.cryptop.keg.align |
item alignment mask |
vm.uma.cryptop.keg.domain |
|
vm.uma.cryptop.keg.domain.[num] |
|
vm.uma.cryptop.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.cryptop.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.cryptop.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.cryptop.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.cryptop.keg.ipers |
items available per-slab |
vm.uma.cryptop.keg.name |
Keg name |
vm.uma.cryptop.keg.ppera |
pages per-slab allocation |
vm.uma.cryptop.keg.reserve |
number of reserved items |
vm.uma.cryptop.keg.rsize |
Real object size with alignment |
vm.uma.cryptop.limit |
|
vm.uma.cryptop.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.cryptop.limit.items |
Current number of allocated items if limit is set |
vm.uma.cryptop.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.cryptop.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.cryptop.limit.sleeps |
Total zone limit sleeps |
vm.uma.cryptop.size |
Allocation size |
vm.uma.cryptop.stats |
|
vm.uma.cryptop.stats.allocs |
Total allocation calls |
vm.uma.cryptop.stats.current |
Current number of allocated items |
vm.uma.cryptop.stats.fails |
Number of allocation failures |
vm.uma.cryptop.stats.frees |
Total free calls |
vm.uma.cryptop.stats.xdomain |
Free calls from the wrong domain |
vm.uma.da_ccb |
|
vm.uma.da_ccb.bucket_size |
Desired per-cpu cache size |
vm.uma.da_ccb.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.da_ccb.domain |
|
vm.uma.da_ccb.domain.[num] |
|
vm.uma.da_ccb.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.da_ccb.domain.[num].imax |
maximum item count in this period |
vm.uma.da_ccb.domain.[num].imin |
minimum item count in this period |
vm.uma.da_ccb.domain.[num].limin |
Long time minimum item count |
vm.uma.da_ccb.domain.[num].nitems |
number of items in this domain |
vm.uma.da_ccb.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.da_ccb.domain.[num].wss |
Working set size |
vm.uma.da_ccb.flags |
Allocator configuration flags |
vm.uma.da_ccb.keg |
|
vm.uma.da_ccb.keg.align |
item alignment mask |
vm.uma.da_ccb.keg.domain |
|
vm.uma.da_ccb.keg.domain.[num] |
|
vm.uma.da_ccb.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.da_ccb.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.da_ccb.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.da_ccb.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.da_ccb.keg.ipers |
items available per-slab |
vm.uma.da_ccb.keg.name |
Keg name |
vm.uma.da_ccb.keg.ppera |
pages per-slab allocation |
vm.uma.da_ccb.keg.reserve |
number of reserved items |
vm.uma.da_ccb.keg.rsize |
Real object size with alignment |
vm.uma.da_ccb.limit |
|
vm.uma.da_ccb.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.da_ccb.limit.items |
Current number of allocated items if limit is set |
vm.uma.da_ccb.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.da_ccb.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.da_ccb.limit.sleeps |
Total zone limit sleeps |
vm.uma.da_ccb.size |
Allocation size |
vm.uma.da_ccb.stats |
|
vm.uma.da_ccb.stats.allocs |
Total allocation calls |
vm.uma.da_ccb.stats.current |
Current number of allocated items |
vm.uma.da_ccb.stats.fails |
Number of allocation failures |
vm.uma.da_ccb.stats.frees |
Total free calls |
vm.uma.da_ccb.stats.xdomain |
Free calls from the wrong domain |
vm.uma.dbuf_dirty_record_t |
|
vm.uma.dbuf_dirty_record_t.bucket_size |
Desired per-cpu cache size |
vm.uma.dbuf_dirty_record_t.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.dbuf_dirty_record_t.domain |
|
vm.uma.dbuf_dirty_record_t.domain.[num] |
|
vm.uma.dbuf_dirty_record_t.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.dbuf_dirty_record_t.domain.[num].imax |
maximum item count in this period |
vm.uma.dbuf_dirty_record_t.domain.[num].imin |
minimum item count in this period |
vm.uma.dbuf_dirty_record_t.domain.[num].limin |
Long time minimum item count |
vm.uma.dbuf_dirty_record_t.domain.[num].nitems |
number of items in this domain |
vm.uma.dbuf_dirty_record_t.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.dbuf_dirty_record_t.domain.[num].wss |
Working set size |
vm.uma.dbuf_dirty_record_t.flags |
Allocator configuration flags |
vm.uma.dbuf_dirty_record_t.keg |
|
vm.uma.dbuf_dirty_record_t.keg.align |
item alignment mask |
vm.uma.dbuf_dirty_record_t.keg.domain |
|
vm.uma.dbuf_dirty_record_t.keg.domain.[num] |
|
vm.uma.dbuf_dirty_record_t.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.dbuf_dirty_record_t.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.dbuf_dirty_record_t.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.dbuf_dirty_record_t.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.dbuf_dirty_record_t.keg.ipers |
items available per-slab |
vm.uma.dbuf_dirty_record_t.keg.name |
Keg name |
vm.uma.dbuf_dirty_record_t.keg.ppera |
pages per-slab allocation |
vm.uma.dbuf_dirty_record_t.keg.reserve |
number of reserved items |
vm.uma.dbuf_dirty_record_t.keg.rsize |
Real object size with alignment |
vm.uma.dbuf_dirty_record_t.limit |
|
vm.uma.dbuf_dirty_record_t.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.dbuf_dirty_record_t.limit.items |
Current number of allocated items if limit is set |
vm.uma.dbuf_dirty_record_t.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.dbuf_dirty_record_t.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.dbuf_dirty_record_t.limit.sleeps |
Total zone limit sleeps |
vm.uma.dbuf_dirty_record_t.size |
Allocation size |
vm.uma.dbuf_dirty_record_t.stats |
|
vm.uma.dbuf_dirty_record_t.stats.allocs |
Total allocation calls |
vm.uma.dbuf_dirty_record_t.stats.current |
Current number of allocated items |
vm.uma.dbuf_dirty_record_t.stats.fails |
Number of allocation failures |
vm.uma.dbuf_dirty_record_t.stats.frees |
Total free calls |
vm.uma.dbuf_dirty_record_t.stats.xdomain |
Free calls from the wrong domain |
vm.uma.ddt_cache |
|
vm.uma.ddt_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.ddt_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.ddt_cache.domain |
|
vm.uma.ddt_cache.domain.[num] |
|
vm.uma.ddt_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.ddt_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.ddt_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.ddt_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.ddt_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.ddt_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.ddt_cache.domain.[num].wss |
Working set size |
vm.uma.ddt_cache.flags |
Allocator configuration flags |
vm.uma.ddt_cache.keg |
|
vm.uma.ddt_cache.keg.align |
item alignment mask |
vm.uma.ddt_cache.keg.domain |
|
vm.uma.ddt_cache.keg.domain.[num] |
|
vm.uma.ddt_cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.ddt_cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.ddt_cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.ddt_cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.ddt_cache.keg.ipers |
items available per-slab |
vm.uma.ddt_cache.keg.name |
Keg name |
vm.uma.ddt_cache.keg.ppera |
pages per-slab allocation |
vm.uma.ddt_cache.keg.reserve |
number of reserved items |
vm.uma.ddt_cache.keg.rsize |
Real object size with alignment |
vm.uma.ddt_cache.limit |
|
vm.uma.ddt_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.ddt_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.ddt_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.ddt_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.ddt_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.ddt_cache.size |
Allocation size |
vm.uma.ddt_cache.stats |
|
vm.uma.ddt_cache.stats.allocs |
Total allocation calls |
vm.uma.ddt_cache.stats.current |
Current number of allocated items |
vm.uma.ddt_cache.stats.fails |
Number of allocation failures |
vm.uma.ddt_cache.stats.frees |
Total free calls |
vm.uma.ddt_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.ddt_entry_cache |
|
vm.uma.ddt_entry_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.ddt_entry_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.ddt_entry_cache.domain |
|
vm.uma.ddt_entry_cache.domain.[num] |
|
vm.uma.ddt_entry_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.ddt_entry_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.ddt_entry_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.ddt_entry_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.ddt_entry_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.ddt_entry_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.ddt_entry_cache.domain.[num].wss |
Working set size |
vm.uma.ddt_entry_cache.flags |
Allocator configuration flags |
vm.uma.ddt_entry_cache.keg |
|
vm.uma.ddt_entry_cache.keg.align |
item alignment mask |
vm.uma.ddt_entry_cache.keg.domain |
|
vm.uma.ddt_entry_cache.keg.domain.[num] |
|
vm.uma.ddt_entry_cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.ddt_entry_cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.ddt_entry_cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.ddt_entry_cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.ddt_entry_cache.keg.ipers |
items available per-slab |
vm.uma.ddt_entry_cache.keg.name |
Keg name |
vm.uma.ddt_entry_cache.keg.ppera |
pages per-slab allocation |
vm.uma.ddt_entry_cache.keg.reserve |
number of reserved items |
vm.uma.ddt_entry_cache.keg.rsize |
Real object size with alignment |
vm.uma.ddt_entry_cache.limit |
|
vm.uma.ddt_entry_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.ddt_entry_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.ddt_entry_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.ddt_entry_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.ddt_entry_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.ddt_entry_cache.size |
Allocation size |
vm.uma.ddt_entry_cache.stats |
|
vm.uma.ddt_entry_cache.stats.allocs |
Total allocation calls |
vm.uma.ddt_entry_cache.stats.current |
Current number of allocated items |
vm.uma.ddt_entry_cache.stats.fails |
Number of allocation failures |
vm.uma.ddt_entry_cache.stats.frees |
Total free calls |
vm.uma.ddt_entry_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.ddt_entry_flat_cache |
|
vm.uma.ddt_entry_flat_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.ddt_entry_flat_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.ddt_entry_flat_cache.domain |
|
vm.uma.ddt_entry_flat_cache.domain.[num] |
|
vm.uma.ddt_entry_flat_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.ddt_entry_flat_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.ddt_entry_flat_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.ddt_entry_flat_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.ddt_entry_flat_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.ddt_entry_flat_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.ddt_entry_flat_cache.domain.[num].wss |
Working set size |
vm.uma.ddt_entry_flat_cache.flags |
Allocator configuration flags |
vm.uma.ddt_entry_flat_cache.keg |
|
vm.uma.ddt_entry_flat_cache.keg.align |
item alignment mask |
vm.uma.ddt_entry_flat_cache.keg.domain |
|
vm.uma.ddt_entry_flat_cache.keg.domain.[num] |
|
vm.uma.ddt_entry_flat_cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.ddt_entry_flat_cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.ddt_entry_flat_cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.ddt_entry_flat_cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.ddt_entry_flat_cache.keg.ipers |
items available per-slab |
vm.uma.ddt_entry_flat_cache.keg.name |
Keg name |
vm.uma.ddt_entry_flat_cache.keg.ppera |
pages per-slab allocation |
vm.uma.ddt_entry_flat_cache.keg.reserve |
number of reserved items |
vm.uma.ddt_entry_flat_cache.keg.rsize |
Real object size with alignment |
vm.uma.ddt_entry_flat_cache.limit |
|
vm.uma.ddt_entry_flat_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.ddt_entry_flat_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.ddt_entry_flat_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.ddt_entry_flat_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.ddt_entry_flat_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.ddt_entry_flat_cache.size |
Allocation size |
vm.uma.ddt_entry_flat_cache.stats |
|
vm.uma.ddt_entry_flat_cache.stats.allocs |
Total allocation calls |
vm.uma.ddt_entry_flat_cache.stats.current |
Current number of allocated items |
vm.uma.ddt_entry_flat_cache.stats.fails |
Number of allocation failures |
vm.uma.ddt_entry_flat_cache.stats.frees |
Total free calls |
vm.uma.ddt_entry_flat_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.ddt_entry_trad_cache |
|
vm.uma.ddt_entry_trad_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.ddt_entry_trad_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.ddt_entry_trad_cache.domain |
|
vm.uma.ddt_entry_trad_cache.domain.[num] |
|
vm.uma.ddt_entry_trad_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.ddt_entry_trad_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.ddt_entry_trad_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.ddt_entry_trad_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.ddt_entry_trad_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.ddt_entry_trad_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.ddt_entry_trad_cache.domain.[num].wss |
Working set size |
vm.uma.ddt_entry_trad_cache.flags |
Allocator configuration flags |
vm.uma.ddt_entry_trad_cache.keg |
|
vm.uma.ddt_entry_trad_cache.keg.align |
item alignment mask |
vm.uma.ddt_entry_trad_cache.keg.domain |
|
vm.uma.ddt_entry_trad_cache.keg.domain.[num] |
|
vm.uma.ddt_entry_trad_cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.ddt_entry_trad_cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.ddt_entry_trad_cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.ddt_entry_trad_cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.ddt_entry_trad_cache.keg.ipers |
items available per-slab |
vm.uma.ddt_entry_trad_cache.keg.name |
Keg name |
vm.uma.ddt_entry_trad_cache.keg.ppera |
pages per-slab allocation |
vm.uma.ddt_entry_trad_cache.keg.reserve |
number of reserved items |
vm.uma.ddt_entry_trad_cache.keg.rsize |
Real object size with alignment |
vm.uma.ddt_entry_trad_cache.limit |
|
vm.uma.ddt_entry_trad_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.ddt_entry_trad_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.ddt_entry_trad_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.ddt_entry_trad_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.ddt_entry_trad_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.ddt_entry_trad_cache.size |
Allocation size |
vm.uma.ddt_entry_trad_cache.stats |
|
vm.uma.ddt_entry_trad_cache.stats.allocs |
Total allocation calls |
vm.uma.ddt_entry_trad_cache.stats.current |
Current number of allocated items |
vm.uma.ddt_entry_trad_cache.stats.fails |
Number of allocation failures |
vm.uma.ddt_entry_trad_cache.stats.frees |
Total free calls |
vm.uma.ddt_entry_trad_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.ddt_log_entry_flat_cache |
|
vm.uma.ddt_log_entry_flat_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.ddt_log_entry_flat_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.ddt_log_entry_flat_cache.domain |
|
vm.uma.ddt_log_entry_flat_cache.domain.[num] |
|
vm.uma.ddt_log_entry_flat_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.ddt_log_entry_flat_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.ddt_log_entry_flat_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.ddt_log_entry_flat_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.ddt_log_entry_flat_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.ddt_log_entry_flat_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.ddt_log_entry_flat_cache.domain.[num].wss |
Working set size |
vm.uma.ddt_log_entry_flat_cache.flags |
Allocator configuration flags |
vm.uma.ddt_log_entry_flat_cache.keg |
|
vm.uma.ddt_log_entry_flat_cache.keg.align |
item alignment mask |
vm.uma.ddt_log_entry_flat_cache.keg.domain |
|
vm.uma.ddt_log_entry_flat_cache.keg.domain.[num] |
|
vm.uma.ddt_log_entry_flat_cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.ddt_log_entry_flat_cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.ddt_log_entry_flat_cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.ddt_log_entry_flat_cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.ddt_log_entry_flat_cache.keg.ipers |
items available per-slab |
vm.uma.ddt_log_entry_flat_cache.keg.name |
Keg name |
vm.uma.ddt_log_entry_flat_cache.keg.ppera |
pages per-slab allocation |
vm.uma.ddt_log_entry_flat_cache.keg.reserve |
number of reserved items |
vm.uma.ddt_log_entry_flat_cache.keg.rsize |
Real object size with alignment |
vm.uma.ddt_log_entry_flat_cache.limit |
|
vm.uma.ddt_log_entry_flat_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.ddt_log_entry_flat_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.ddt_log_entry_flat_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.ddt_log_entry_flat_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.ddt_log_entry_flat_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.ddt_log_entry_flat_cache.size |
Allocation size |
vm.uma.ddt_log_entry_flat_cache.stats |
|
vm.uma.ddt_log_entry_flat_cache.stats.allocs |
Total allocation calls |
vm.uma.ddt_log_entry_flat_cache.stats.current |
Current number of allocated items |
vm.uma.ddt_log_entry_flat_cache.stats.fails |
Number of allocation failures |
vm.uma.ddt_log_entry_flat_cache.stats.frees |
Total free calls |
vm.uma.ddt_log_entry_flat_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.ddt_log_entry_trad_cache |
|
vm.uma.ddt_log_entry_trad_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.ddt_log_entry_trad_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.ddt_log_entry_trad_cache.domain |
|
vm.uma.ddt_log_entry_trad_cache.domain.[num] |
|
vm.uma.ddt_log_entry_trad_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.ddt_log_entry_trad_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.ddt_log_entry_trad_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.ddt_log_entry_trad_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.ddt_log_entry_trad_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.ddt_log_entry_trad_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.ddt_log_entry_trad_cache.domain.[num].wss |
Working set size |
vm.uma.ddt_log_entry_trad_cache.flags |
Allocator configuration flags |
vm.uma.ddt_log_entry_trad_cache.keg |
|
vm.uma.ddt_log_entry_trad_cache.keg.align |
item alignment mask |
vm.uma.ddt_log_entry_trad_cache.keg.domain |
|
vm.uma.ddt_log_entry_trad_cache.keg.domain.[num] |
|
vm.uma.ddt_log_entry_trad_cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.ddt_log_entry_trad_cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.ddt_log_entry_trad_cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.ddt_log_entry_trad_cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.ddt_log_entry_trad_cache.keg.ipers |
items available per-slab |
vm.uma.ddt_log_entry_trad_cache.keg.name |
Keg name |
vm.uma.ddt_log_entry_trad_cache.keg.ppera |
pages per-slab allocation |
vm.uma.ddt_log_entry_trad_cache.keg.reserve |
number of reserved items |
vm.uma.ddt_log_entry_trad_cache.keg.rsize |
Real object size with alignment |
vm.uma.ddt_log_entry_trad_cache.limit |
|
vm.uma.ddt_log_entry_trad_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.ddt_log_entry_trad_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.ddt_log_entry_trad_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.ddt_log_entry_trad_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.ddt_log_entry_trad_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.ddt_log_entry_trad_cache.size |
Allocation size |
vm.uma.ddt_log_entry_trad_cache.stats |
|
vm.uma.ddt_log_entry_trad_cache.stats.allocs |
Total allocation calls |
vm.uma.ddt_log_entry_trad_cache.stats.current |
Current number of allocated items |
vm.uma.ddt_log_entry_trad_cache.stats.fails |
Number of allocation failures |
vm.uma.ddt_log_entry_trad_cache.stats.frees |
Total free calls |
vm.uma.ddt_log_entry_trad_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.debugnet_mbuf |
|
vm.uma.debugnet_mbuf.bucket_size |
Desired per-cpu cache size |
vm.uma.debugnet_mbuf.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.debugnet_mbuf.domain |
|
vm.uma.debugnet_mbuf.domain.[num] |
|
vm.uma.debugnet_mbuf.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.debugnet_mbuf.domain.[num].imax |
maximum item count in this period |
vm.uma.debugnet_mbuf.domain.[num].imin |
minimum item count in this period |
vm.uma.debugnet_mbuf.domain.[num].limin |
Long time minimum item count |
vm.uma.debugnet_mbuf.domain.[num].nitems |
number of items in this domain |
vm.uma.debugnet_mbuf.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.debugnet_mbuf.domain.[num].wss |
Working set size |
vm.uma.debugnet_mbuf.flags |
Allocator configuration flags |
vm.uma.debugnet_mbuf.keg |
|
vm.uma.debugnet_mbuf.keg.name |
Keg name |
vm.uma.debugnet_mbuf.limit |
|
vm.uma.debugnet_mbuf.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.debugnet_mbuf.limit.items |
Current number of allocated items if limit is set |
vm.uma.debugnet_mbuf.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.debugnet_mbuf.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.debugnet_mbuf.limit.sleeps |
Total zone limit sleeps |
vm.uma.debugnet_mbuf.size |
Allocation size |
vm.uma.debugnet_mbuf.stats |
|
vm.uma.debugnet_mbuf.stats.allocs |
Total allocation calls |
vm.uma.debugnet_mbuf.stats.current |
Current number of allocated items |
vm.uma.debugnet_mbuf.stats.fails |
Number of allocation failures |
vm.uma.debugnet_mbuf.stats.frees |
Total free calls |
vm.uma.debugnet_mbuf.stats.xdomain |
Free calls from the wrong domain |
vm.uma.debugnet_mbuf_cluster |
|
vm.uma.debugnet_mbuf_cluster.bucket_size |
Desired per-cpu cache size |
vm.uma.debugnet_mbuf_cluster.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.debugnet_mbuf_cluster.domain |
|
vm.uma.debugnet_mbuf_cluster.domain.[num] |
|
vm.uma.debugnet_mbuf_cluster.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.debugnet_mbuf_cluster.domain.[num].imax |
maximum item count in this period |
vm.uma.debugnet_mbuf_cluster.domain.[num].imin |
minimum item count in this period |
vm.uma.debugnet_mbuf_cluster.domain.[num].limin |
Long time minimum item count |
vm.uma.debugnet_mbuf_cluster.domain.[num].nitems |
number of items in this domain |
vm.uma.debugnet_mbuf_cluster.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.debugnet_mbuf_cluster.domain.[num].wss |
Working set size |
vm.uma.debugnet_mbuf_cluster.flags |
Allocator configuration flags |
vm.uma.debugnet_mbuf_cluster.keg |
|
vm.uma.debugnet_mbuf_cluster.keg.name |
Keg name |
vm.uma.debugnet_mbuf_cluster.limit |
|
vm.uma.debugnet_mbuf_cluster.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.debugnet_mbuf_cluster.limit.items |
Current number of allocated items if limit is set |
vm.uma.debugnet_mbuf_cluster.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.debugnet_mbuf_cluster.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.debugnet_mbuf_cluster.limit.sleeps |
Total zone limit sleeps |
vm.uma.debugnet_mbuf_cluster.size |
Allocation size |
vm.uma.debugnet_mbuf_cluster.stats |
|
vm.uma.debugnet_mbuf_cluster.stats.allocs |
Total allocation calls |
vm.uma.debugnet_mbuf_cluster.stats.current |
Current number of allocated items |
vm.uma.debugnet_mbuf_cluster.stats.fails |
Number of allocation failures |
vm.uma.debugnet_mbuf_cluster.stats.frees |
Total free calls |
vm.uma.debugnet_mbuf_cluster.stats.xdomain |
Free calls from the wrong domain |
vm.uma.debugnet_mbuf_packet |
|
vm.uma.debugnet_mbuf_packet.bucket_size |
Desired per-cpu cache size |
vm.uma.debugnet_mbuf_packet.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.debugnet_mbuf_packet.domain |
|
vm.uma.debugnet_mbuf_packet.domain.[num] |
|
vm.uma.debugnet_mbuf_packet.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.debugnet_mbuf_packet.domain.[num].imax |
maximum item count in this period |
vm.uma.debugnet_mbuf_packet.domain.[num].imin |
minimum item count in this period |
vm.uma.debugnet_mbuf_packet.domain.[num].limin |
Long time minimum item count |
vm.uma.debugnet_mbuf_packet.domain.[num].nitems |
number of items in this domain |
vm.uma.debugnet_mbuf_packet.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.debugnet_mbuf_packet.domain.[num].wss |
Working set size |
vm.uma.debugnet_mbuf_packet.flags |
Allocator configuration flags |
vm.uma.debugnet_mbuf_packet.keg |
|
vm.uma.debugnet_mbuf_packet.keg.name |
Keg name |
vm.uma.debugnet_mbuf_packet.limit |
|
vm.uma.debugnet_mbuf_packet.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.debugnet_mbuf_packet.limit.items |
Current number of allocated items if limit is set |
vm.uma.debugnet_mbuf_packet.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.debugnet_mbuf_packet.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.debugnet_mbuf_packet.limit.sleeps |
Total zone limit sleeps |
vm.uma.debugnet_mbuf_packet.size |
Allocation size |
vm.uma.debugnet_mbuf_packet.stats |
|
vm.uma.debugnet_mbuf_packet.stats.allocs |
Total allocation calls |
vm.uma.debugnet_mbuf_packet.stats.current |
Current number of allocated items |
vm.uma.debugnet_mbuf_packet.stats.fails |
Number of allocation failures |
vm.uma.debugnet_mbuf_packet.stats.frees |
Total free calls |
vm.uma.debugnet_mbuf_packet.stats.xdomain |
Free calls from the wrong domain |
vm.uma.dmu_buf_impl_t |
|
vm.uma.dmu_buf_impl_t.bucket_size |
Desired per-cpu cache size |
vm.uma.dmu_buf_impl_t.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.dmu_buf_impl_t.domain |
|
vm.uma.dmu_buf_impl_t.domain.[num] |
|
vm.uma.dmu_buf_impl_t.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.dmu_buf_impl_t.domain.[num].imax |
maximum item count in this period |
vm.uma.dmu_buf_impl_t.domain.[num].imin |
minimum item count in this period |
vm.uma.dmu_buf_impl_t.domain.[num].limin |
Long time minimum item count |
vm.uma.dmu_buf_impl_t.domain.[num].nitems |
number of items in this domain |
vm.uma.dmu_buf_impl_t.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.dmu_buf_impl_t.domain.[num].wss |
Working set size |
vm.uma.dmu_buf_impl_t.flags |
Allocator configuration flags |
vm.uma.dmu_buf_impl_t.keg |
|
vm.uma.dmu_buf_impl_t.keg.align |
item alignment mask |
vm.uma.dmu_buf_impl_t.keg.domain |
|
vm.uma.dmu_buf_impl_t.keg.domain.[num] |
|
vm.uma.dmu_buf_impl_t.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.dmu_buf_impl_t.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.dmu_buf_impl_t.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.dmu_buf_impl_t.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.dmu_buf_impl_t.keg.ipers |
items available per-slab |
vm.uma.dmu_buf_impl_t.keg.name |
Keg name |
vm.uma.dmu_buf_impl_t.keg.ppera |
pages per-slab allocation |
vm.uma.dmu_buf_impl_t.keg.reserve |
number of reserved items |
vm.uma.dmu_buf_impl_t.keg.rsize |
Real object size with alignment |
vm.uma.dmu_buf_impl_t.limit |
|
vm.uma.dmu_buf_impl_t.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.dmu_buf_impl_t.limit.items |
Current number of allocated items if limit is set |
vm.uma.dmu_buf_impl_t.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.dmu_buf_impl_t.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.dmu_buf_impl_t.limit.sleeps |
Total zone limit sleeps |
vm.uma.dmu_buf_impl_t.size |
Allocation size |
vm.uma.dmu_buf_impl_t.stats |
|
vm.uma.dmu_buf_impl_t.stats.allocs |
Total allocation calls |
vm.uma.dmu_buf_impl_t.stats.current |
Current number of allocated items |
vm.uma.dmu_buf_impl_t.stats.fails |
Number of allocation failures |
vm.uma.dmu_buf_impl_t.stats.frees |
Total free calls |
vm.uma.dmu_buf_impl_t.stats.xdomain |
Free calls from the wrong domain |
vm.uma.dnode_t |
|
vm.uma.dnode_t.bucket_size |
Desired per-cpu cache size |
vm.uma.dnode_t.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.dnode_t.domain |
|
vm.uma.dnode_t.domain.[num] |
|
vm.uma.dnode_t.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.dnode_t.domain.[num].imax |
maximum item count in this period |
vm.uma.dnode_t.domain.[num].imin |
minimum item count in this period |
vm.uma.dnode_t.domain.[num].limin |
Long time minimum item count |
vm.uma.dnode_t.domain.[num].nitems |
number of items in this domain |
vm.uma.dnode_t.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.dnode_t.domain.[num].wss |
Working set size |
vm.uma.dnode_t.flags |
Allocator configuration flags |
vm.uma.dnode_t.keg |
|
vm.uma.dnode_t.keg.align |
item alignment mask |
vm.uma.dnode_t.keg.domain |
|
vm.uma.dnode_t.keg.domain.[num] |
|
vm.uma.dnode_t.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.dnode_t.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.dnode_t.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.dnode_t.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.dnode_t.keg.ipers |
items available per-slab |
vm.uma.dnode_t.keg.name |
Keg name |
vm.uma.dnode_t.keg.ppera |
pages per-slab allocation |
vm.uma.dnode_t.keg.reserve |
number of reserved items |
vm.uma.dnode_t.keg.rsize |
Real object size with alignment |
vm.uma.dnode_t.limit |
|
vm.uma.dnode_t.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.dnode_t.limit.items |
Current number of allocated items if limit is set |
vm.uma.dnode_t.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.dnode_t.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.dnode_t.limit.sleeps |
Total zone limit sleeps |
vm.uma.dnode_t.size |
Allocation size |
vm.uma.dnode_t.stats |
|
vm.uma.dnode_t.stats.allocs |
Total allocation calls |
vm.uma.dnode_t.stats.current |
Current number of allocated items |
vm.uma.dnode_t.stats.fails |
Number of allocation failures |
vm.uma.dnode_t.stats.frees |
Total free calls |
vm.uma.dnode_t.stats.xdomain |
Free calls from the wrong domain |
vm.uma.domainset |
|
vm.uma.domainset.bucket_size |
Desired per-cpu cache size |
vm.uma.domainset.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.domainset.domain |
|
vm.uma.domainset.domain.[num] |
|
vm.uma.domainset.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.domainset.domain.[num].imax |
maximum item count in this period |
vm.uma.domainset.domain.[num].imin |
minimum item count in this period |
vm.uma.domainset.domain.[num].limin |
Long time minimum item count |
vm.uma.domainset.domain.[num].nitems |
number of items in this domain |
vm.uma.domainset.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.domainset.domain.[num].wss |
Working set size |
vm.uma.domainset.flags |
Allocator configuration flags |
vm.uma.domainset.keg |
|
vm.uma.domainset.keg.align |
item alignment mask |
vm.uma.domainset.keg.domain |
|
vm.uma.domainset.keg.domain.[num] |
|
vm.uma.domainset.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.domainset.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.domainset.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.domainset.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.domainset.keg.ipers |
items available per-slab |
vm.uma.domainset.keg.name |
Keg name |
vm.uma.domainset.keg.ppera |
pages per-slab allocation |
vm.uma.domainset.keg.reserve |
number of reserved items |
vm.uma.domainset.keg.rsize |
Real object size with alignment |
vm.uma.domainset.limit |
|
vm.uma.domainset.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.domainset.limit.items |
Current number of allocated items if limit is set |
vm.uma.domainset.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.domainset.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.domainset.limit.sleeps |
Total zone limit sleeps |
vm.uma.domainset.size |
Allocation size |
vm.uma.domainset.stats |
|
vm.uma.domainset.stats.allocs |
Total allocation calls |
vm.uma.domainset.stats.current |
Current number of allocated items |
vm.uma.domainset.stats.fails |
Number of allocation failures |
vm.uma.domainset.stats.frees |
Total free calls |
vm.uma.domainset.stats.xdomain |
Free calls from the wrong domain |
vm.uma.drm_buddy_block |
|
vm.uma.drm_buddy_block.bucket_size |
Desired per-cpu cache size |
vm.uma.drm_buddy_block.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.drm_buddy_block.domain |
|
vm.uma.drm_buddy_block.domain.[num] |
|
vm.uma.drm_buddy_block.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.drm_buddy_block.domain.[num].imax |
maximum item count in this period |
vm.uma.drm_buddy_block.domain.[num].imin |
minimum item count in this period |
vm.uma.drm_buddy_block.domain.[num].limin |
Long time minimum item count |
vm.uma.drm_buddy_block.domain.[num].nitems |
number of items in this domain |
vm.uma.drm_buddy_block.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.drm_buddy_block.domain.[num].wss |
Working set size |
vm.uma.drm_buddy_block.flags |
Allocator configuration flags |
vm.uma.drm_buddy_block.keg |
|
vm.uma.drm_buddy_block.keg.align |
item alignment mask |
vm.uma.drm_buddy_block.keg.domain |
|
vm.uma.drm_buddy_block.keg.domain.[num] |
|
vm.uma.drm_buddy_block.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.drm_buddy_block.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.drm_buddy_block.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.drm_buddy_block.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.drm_buddy_block.keg.ipers |
items available per-slab |
vm.uma.drm_buddy_block.keg.name |
Keg name |
vm.uma.drm_buddy_block.keg.ppera |
pages per-slab allocation |
vm.uma.drm_buddy_block.keg.reserve |
number of reserved items |
vm.uma.drm_buddy_block.keg.rsize |
Real object size with alignment |
vm.uma.drm_buddy_block.limit |
|
vm.uma.drm_buddy_block.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.drm_buddy_block.limit.items |
Current number of allocated items if limit is set |
vm.uma.drm_buddy_block.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.drm_buddy_block.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.drm_buddy_block.limit.sleeps |
Total zone limit sleeps |
vm.uma.drm_buddy_block.size |
Allocation size |
vm.uma.drm_buddy_block.stats |
|
vm.uma.drm_buddy_block.stats.allocs |
Total allocation calls |
vm.uma.drm_buddy_block.stats.current |
Current number of allocated items |
vm.uma.drm_buddy_block.stats.fails |
Number of allocation failures |
vm.uma.drm_buddy_block.stats.frees |
Total free calls |
vm.uma.drm_buddy_block.stats.xdomain |
Free calls from the wrong domain |
vm.uma.drm_sched_fence |
|
vm.uma.drm_sched_fence.bucket_size |
Desired per-cpu cache size |
vm.uma.drm_sched_fence.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.drm_sched_fence.domain |
|
vm.uma.drm_sched_fence.domain.[num] |
|
vm.uma.drm_sched_fence.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.drm_sched_fence.domain.[num].imax |
maximum item count in this period |
vm.uma.drm_sched_fence.domain.[num].imin |
minimum item count in this period |
vm.uma.drm_sched_fence.domain.[num].limin |
Long time minimum item count |
vm.uma.drm_sched_fence.domain.[num].nitems |
number of items in this domain |
vm.uma.drm_sched_fence.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.drm_sched_fence.domain.[num].wss |
Working set size |
vm.uma.drm_sched_fence.flags |
Allocator configuration flags |
vm.uma.drm_sched_fence.keg |
|
vm.uma.drm_sched_fence.keg.align |
item alignment mask |
vm.uma.drm_sched_fence.keg.domain |
|
vm.uma.drm_sched_fence.keg.domain.[num] |
|
vm.uma.drm_sched_fence.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.drm_sched_fence.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.drm_sched_fence.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.drm_sched_fence.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.drm_sched_fence.keg.ipers |
items available per-slab |
vm.uma.drm_sched_fence.keg.name |
Keg name |
vm.uma.drm_sched_fence.keg.ppera |
pages per-slab allocation |
vm.uma.drm_sched_fence.keg.reserve |
number of reserved items |
vm.uma.drm_sched_fence.keg.rsize |
Real object size with alignment |
vm.uma.drm_sched_fence.limit |
|
vm.uma.drm_sched_fence.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.drm_sched_fence.limit.items |
Current number of allocated items if limit is set |
vm.uma.drm_sched_fence.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.drm_sched_fence.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.drm_sched_fence.limit.sleeps |
Total zone limit sleeps |
vm.uma.drm_sched_fence.size |
Allocation size |
vm.uma.drm_sched_fence.stats |
|
vm.uma.drm_sched_fence.stats.allocs |
Total allocation calls |
vm.uma.drm_sched_fence.stats.current |
Current number of allocated items |
vm.uma.drm_sched_fence.stats.fails |
Number of allocation failures |
vm.uma.drm_sched_fence.stats.frees |
Total free calls |
vm.uma.drm_sched_fence.stats.xdomain |
Free calls from the wrong domain |
vm.uma.epoch_record_pcpu |
|
vm.uma.epoch_record_pcpu.bucket_size |
Desired per-cpu cache size |
vm.uma.epoch_record_pcpu.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.epoch_record_pcpu.domain |
|
vm.uma.epoch_record_pcpu.domain.[num] |
|
vm.uma.epoch_record_pcpu.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.epoch_record_pcpu.domain.[num].imax |
maximum item count in this period |
vm.uma.epoch_record_pcpu.domain.[num].imin |
minimum item count in this period |
vm.uma.epoch_record_pcpu.domain.[num].limin |
Long time minimum item count |
vm.uma.epoch_record_pcpu.domain.[num].nitems |
number of items in this domain |
vm.uma.epoch_record_pcpu.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.epoch_record_pcpu.domain.[num].wss |
Working set size |
vm.uma.epoch_record_pcpu.flags |
Allocator configuration flags |
vm.uma.epoch_record_pcpu.keg |
|
vm.uma.epoch_record_pcpu.keg.align |
item alignment mask |
vm.uma.epoch_record_pcpu.keg.domain |
|
vm.uma.epoch_record_pcpu.keg.domain.[num] |
|
vm.uma.epoch_record_pcpu.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.epoch_record_pcpu.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.epoch_record_pcpu.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.epoch_record_pcpu.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.epoch_record_pcpu.keg.ipers |
items available per-slab |
vm.uma.epoch_record_pcpu.keg.name |
Keg name |
vm.uma.epoch_record_pcpu.keg.ppera |
pages per-slab allocation |
vm.uma.epoch_record_pcpu.keg.reserve |
number of reserved items |
vm.uma.epoch_record_pcpu.keg.rsize |
Real object size with alignment |
vm.uma.epoch_record_pcpu.limit |
|
vm.uma.epoch_record_pcpu.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.epoch_record_pcpu.limit.items |
Current number of allocated items if limit is set |
vm.uma.epoch_record_pcpu.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.epoch_record_pcpu.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.epoch_record_pcpu.limit.sleeps |
Total zone limit sleeps |
vm.uma.epoch_record_pcpu.size |
Allocation size |
vm.uma.epoch_record_pcpu.stats |
|
vm.uma.epoch_record_pcpu.stats.allocs |
Total allocation calls |
vm.uma.epoch_record_pcpu.stats.current |
Current number of allocated items |
vm.uma.epoch_record_pcpu.stats.fails |
Number of allocation failures |
vm.uma.epoch_record_pcpu.stats.frees |
Total free calls |
vm.uma.epoch_record_pcpu.stats.xdomain |
Free calls from the wrong domain |
vm.uma.ertt |
|
vm.uma.ertt.bucket_size |
Desired per-cpu cache size |
vm.uma.ertt.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.ertt.domain |
|
vm.uma.ertt.domain.[num] |
|
vm.uma.ertt.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.ertt.domain.[num].imax |
maximum item count in this period |
vm.uma.ertt.domain.[num].imin |
minimum item count in this period |
vm.uma.ertt.domain.[num].limin |
Long time minimum item count |
vm.uma.ertt.domain.[num].nitems |
number of items in this domain |
vm.uma.ertt.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.ertt.domain.[num].wss |
Working set size |
vm.uma.ertt.flags |
Allocator configuration flags |
vm.uma.ertt.keg |
|
vm.uma.ertt.keg.align |
item alignment mask |
vm.uma.ertt.keg.domain |
|
vm.uma.ertt.keg.domain.[num] |
|
vm.uma.ertt.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.ertt.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.ertt.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.ertt.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.ertt.keg.ipers |
items available per-slab |
vm.uma.ertt.keg.name |
Keg name |
vm.uma.ertt.keg.ppera |
pages per-slab allocation |
vm.uma.ertt.keg.reserve |
number of reserved items |
vm.uma.ertt.keg.rsize |
Real object size with alignment |
vm.uma.ertt.limit |
|
vm.uma.ertt.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.ertt.limit.items |
Current number of allocated items if limit is set |
vm.uma.ertt.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.ertt.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.ertt.limit.sleeps |
Total zone limit sleeps |
vm.uma.ertt.size |
Allocation size |
vm.uma.ertt.stats |
|
vm.uma.ertt.stats.allocs |
Total allocation calls |
vm.uma.ertt.stats.current |
Current number of allocated items |
vm.uma.ertt.stats.fails |
Number of allocation failures |
vm.uma.ertt.stats.frees |
Total free calls |
vm.uma.ertt.stats.xdomain |
Free calls from the wrong domain |
vm.uma.ertt_txseginfo |
|
vm.uma.ertt_txseginfo.bucket_size |
Desired per-cpu cache size |
vm.uma.ertt_txseginfo.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.ertt_txseginfo.domain |
|
vm.uma.ertt_txseginfo.domain.[num] |
|
vm.uma.ertt_txseginfo.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.ertt_txseginfo.domain.[num].imax |
maximum item count in this period |
vm.uma.ertt_txseginfo.domain.[num].imin |
minimum item count in this period |
vm.uma.ertt_txseginfo.domain.[num].limin |
Long time minimum item count |
vm.uma.ertt_txseginfo.domain.[num].nitems |
number of items in this domain |
vm.uma.ertt_txseginfo.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.ertt_txseginfo.domain.[num].wss |
Working set size |
vm.uma.ertt_txseginfo.flags |
Allocator configuration flags |
vm.uma.ertt_txseginfo.keg |
|
vm.uma.ertt_txseginfo.keg.align |
item alignment mask |
vm.uma.ertt_txseginfo.keg.domain |
|
vm.uma.ertt_txseginfo.keg.domain.[num] |
|
vm.uma.ertt_txseginfo.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.ertt_txseginfo.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.ertt_txseginfo.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.ertt_txseginfo.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.ertt_txseginfo.keg.ipers |
items available per-slab |
vm.uma.ertt_txseginfo.keg.name |
Keg name |
vm.uma.ertt_txseginfo.keg.ppera |
pages per-slab allocation |
vm.uma.ertt_txseginfo.keg.reserve |
number of reserved items |
vm.uma.ertt_txseginfo.keg.rsize |
Real object size with alignment |
vm.uma.ertt_txseginfo.limit |
|
vm.uma.ertt_txseginfo.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.ertt_txseginfo.limit.items |
Current number of allocated items if limit is set |
vm.uma.ertt_txseginfo.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.ertt_txseginfo.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.ertt_txseginfo.limit.sleeps |
Total zone limit sleeps |
vm.uma.ertt_txseginfo.size |
Allocation size |
vm.uma.ertt_txseginfo.stats |
|
vm.uma.ertt_txseginfo.stats.allocs |
Total allocation calls |
vm.uma.ertt_txseginfo.stats.current |
Current number of allocated items |
vm.uma.ertt_txseginfo.stats.fails |
Number of allocation failures |
vm.uma.ertt_txseginfo.stats.frees |
Total free calls |
vm.uma.ertt_txseginfo.stats.xdomain |
Free calls from the wrong domain |
vm.uma.fakepg |
|
vm.uma.fakepg.bucket_size |
Desired per-cpu cache size |
vm.uma.fakepg.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.fakepg.domain |
|
vm.uma.fakepg.domain.[num] |
|
vm.uma.fakepg.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.fakepg.domain.[num].imax |
maximum item count in this period |
vm.uma.fakepg.domain.[num].imin |
minimum item count in this period |
vm.uma.fakepg.domain.[num].limin |
Long time minimum item count |
vm.uma.fakepg.domain.[num].nitems |
number of items in this domain |
vm.uma.fakepg.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.fakepg.domain.[num].wss |
Working set size |
vm.uma.fakepg.flags |
Allocator configuration flags |
vm.uma.fakepg.keg |
|
vm.uma.fakepg.keg.align |
item alignment mask |
vm.uma.fakepg.keg.domain |
|
vm.uma.fakepg.keg.domain.[num] |
|
vm.uma.fakepg.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.fakepg.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.fakepg.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.fakepg.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.fakepg.keg.ipers |
items available per-slab |
vm.uma.fakepg.keg.name |
Keg name |
vm.uma.fakepg.keg.ppera |
pages per-slab allocation |
vm.uma.fakepg.keg.reserve |
number of reserved items |
vm.uma.fakepg.keg.rsize |
Real object size with alignment |
vm.uma.fakepg.limit |
|
vm.uma.fakepg.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.fakepg.limit.items |
Current number of allocated items if limit is set |
vm.uma.fakepg.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.fakepg.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.fakepg.limit.sleeps |
Total zone limit sleeps |
vm.uma.fakepg.size |
Allocation size |
vm.uma.fakepg.stats |
|
vm.uma.fakepg.stats.allocs |
Total allocation calls |
vm.uma.fakepg.stats.current |
Current number of allocated items |
vm.uma.fakepg.stats.fails |
Number of allocation failures |
vm.uma.fakepg.stats.frees |
Total free calls |
vm.uma.fakepg.stats.xdomain |
Free calls from the wrong domain |
vm.uma.filedesc[num] |
|
vm.uma.filedesc[num].bucket_size |
Desired per-cpu cache size |
vm.uma.filedesc[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.filedesc[num].domain |
|
vm.uma.filedesc[num].domain.[num] |
|
vm.uma.filedesc[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.filedesc[num].domain.[num].imax |
maximum item count in this period |
vm.uma.filedesc[num].domain.[num].imin |
minimum item count in this period |
vm.uma.filedesc[num].domain.[num].limin |
Long time minimum item count |
vm.uma.filedesc[num].domain.[num].nitems |
number of items in this domain |
vm.uma.filedesc[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.filedesc[num].domain.[num].wss |
Working set size |
vm.uma.filedesc[num].flags |
Allocator configuration flags |
vm.uma.filedesc[num].keg |
|
vm.uma.filedesc[num].keg.align |
item alignment mask |
vm.uma.filedesc[num].keg.domain |
|
vm.uma.filedesc[num].keg.domain.[num] |
|
vm.uma.filedesc[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.filedesc[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.filedesc[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.filedesc[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.filedesc[num].keg.ipers |
items available per-slab |
vm.uma.filedesc[num].keg.name |
Keg name |
vm.uma.filedesc[num].keg.ppera |
pages per-slab allocation |
vm.uma.filedesc[num].keg.reserve |
number of reserved items |
vm.uma.filedesc[num].keg.rsize |
Real object size with alignment |
vm.uma.filedesc[num].limit |
|
vm.uma.filedesc[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.filedesc[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.filedesc[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.filedesc[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.filedesc[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.filedesc[num].size |
Allocation size |
vm.uma.filedesc[num].stats |
|
vm.uma.filedesc[num].stats.allocs |
Total allocation calls |
vm.uma.filedesc[num].stats.current |
Current number of allocated items |
vm.uma.filedesc[num].stats.fails |
Number of allocation failures |
vm.uma.filedesc[num].stats.frees |
Total free calls |
vm.uma.filedesc[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.fuse_ticket |
|
vm.uma.fuse_ticket.bucket_size |
Desired per-cpu cache size |
vm.uma.fuse_ticket.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.fuse_ticket.domain |
|
vm.uma.fuse_ticket.domain.[num] |
|
vm.uma.fuse_ticket.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.fuse_ticket.domain.[num].imax |
maximum item count in this period |
vm.uma.fuse_ticket.domain.[num].imin |
minimum item count in this period |
vm.uma.fuse_ticket.domain.[num].limin |
Long time minimum item count |
vm.uma.fuse_ticket.domain.[num].nitems |
number of items in this domain |
vm.uma.fuse_ticket.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.fuse_ticket.domain.[num].wss |
Working set size |
vm.uma.fuse_ticket.flags |
Allocator configuration flags |
vm.uma.fuse_ticket.keg |
|
vm.uma.fuse_ticket.keg.align |
item alignment mask |
vm.uma.fuse_ticket.keg.domain |
|
vm.uma.fuse_ticket.keg.domain.[num] |
|
vm.uma.fuse_ticket.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.fuse_ticket.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.fuse_ticket.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.fuse_ticket.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.fuse_ticket.keg.ipers |
items available per-slab |
vm.uma.fuse_ticket.keg.name |
Keg name |
vm.uma.fuse_ticket.keg.ppera |
pages per-slab allocation |
vm.uma.fuse_ticket.keg.reserve |
number of reserved items |
vm.uma.fuse_ticket.keg.rsize |
Real object size with alignment |
vm.uma.fuse_ticket.limit |
|
vm.uma.fuse_ticket.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.fuse_ticket.limit.items |
Current number of allocated items if limit is set |
vm.uma.fuse_ticket.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.fuse_ticket.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.fuse_ticket.limit.sleeps |
Total zone limit sleeps |
vm.uma.fuse_ticket.size |
Allocation size |
vm.uma.fuse_ticket.stats |
|
vm.uma.fuse_ticket.stats.allocs |
Total allocation calls |
vm.uma.fuse_ticket.stats.current |
Current number of allocated items |
vm.uma.fuse_ticket.stats.fails |
Number of allocation failures |
vm.uma.fuse_ticket.stats.frees |
Total free calls |
vm.uma.fuse_ticket.stats.xdomain |
Free calls from the wrong domain |
vm.uma.g_bio |
|
vm.uma.g_bio.bucket_size |
Desired per-cpu cache size |
vm.uma.g_bio.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.g_bio.domain |
|
vm.uma.g_bio.domain.[num] |
|
vm.uma.g_bio.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.g_bio.domain.[num].imax |
maximum item count in this period |
vm.uma.g_bio.domain.[num].imin |
minimum item count in this period |
vm.uma.g_bio.domain.[num].limin |
Long time minimum item count |
vm.uma.g_bio.domain.[num].nitems |
number of items in this domain |
vm.uma.g_bio.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.g_bio.domain.[num].wss |
Working set size |
vm.uma.g_bio.flags |
Allocator configuration flags |
vm.uma.g_bio.keg |
|
vm.uma.g_bio.keg.align |
item alignment mask |
vm.uma.g_bio.keg.domain |
|
vm.uma.g_bio.keg.domain.[num] |
|
vm.uma.g_bio.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.g_bio.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.g_bio.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.g_bio.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.g_bio.keg.ipers |
items available per-slab |
vm.uma.g_bio.keg.name |
Keg name |
vm.uma.g_bio.keg.ppera |
pages per-slab allocation |
vm.uma.g_bio.keg.reserve |
number of reserved items |
vm.uma.g_bio.keg.rsize |
Real object size with alignment |
vm.uma.g_bio.limit |
|
vm.uma.g_bio.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.g_bio.limit.items |
Current number of allocated items if limit is set |
vm.uma.g_bio.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.g_bio.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.g_bio.limit.sleeps |
Total zone limit sleeps |
vm.uma.g_bio.size |
Allocation size |
vm.uma.g_bio.stats |
|
vm.uma.g_bio.stats.allocs |
Total allocation calls |
vm.uma.g_bio.stats.current |
Current number of allocated items |
vm.uma.g_bio.stats.fails |
Number of allocation failures |
vm.uma.g_bio.stats.frees |
Total free calls |
vm.uma.g_bio.stats.xdomain |
Free calls from the wrong domain |
vm.uma.hostcache |
|
vm.uma.hostcache.bucket_size |
Desired per-cpu cache size |
vm.uma.hostcache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.hostcache.domain |
|
vm.uma.hostcache.domain.[num] |
|
vm.uma.hostcache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.hostcache.domain.[num].imax |
maximum item count in this period |
vm.uma.hostcache.domain.[num].imin |
minimum item count in this period |
vm.uma.hostcache.domain.[num].limin |
Long time minimum item count |
vm.uma.hostcache.domain.[num].nitems |
number of items in this domain |
vm.uma.hostcache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.hostcache.domain.[num].wss |
Working set size |
vm.uma.hostcache.flags |
Allocator configuration flags |
vm.uma.hostcache.keg |
|
vm.uma.hostcache.keg.align |
item alignment mask |
vm.uma.hostcache.keg.domain |
|
vm.uma.hostcache.keg.domain.[num] |
|
vm.uma.hostcache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.hostcache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.hostcache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.hostcache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.hostcache.keg.ipers |
items available per-slab |
vm.uma.hostcache.keg.name |
Keg name |
vm.uma.hostcache.keg.ppera |
pages per-slab allocation |
vm.uma.hostcache.keg.reserve |
number of reserved items |
vm.uma.hostcache.keg.rsize |
Real object size with alignment |
vm.uma.hostcache.limit |
|
vm.uma.hostcache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.hostcache.limit.items |
Current number of allocated items if limit is set |
vm.uma.hostcache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.hostcache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.hostcache.limit.sleeps |
Total zone limit sleeps |
vm.uma.hostcache.size |
Allocation size |
vm.uma.hostcache.stats |
|
vm.uma.hostcache.stats.allocs |
Total allocation calls |
vm.uma.hostcache.stats.current |
Current number of allocated items |
vm.uma.hostcache.stats.fails |
Number of allocation failures |
vm.uma.hostcache.stats.frees |
Total free calls |
vm.uma.hostcache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.hostcache_[num] |
|
vm.uma.hostcache_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.hostcache_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.hostcache_[num].domain |
|
vm.uma.hostcache_[num].domain.[num] |
|
vm.uma.hostcache_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.hostcache_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.hostcache_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.hostcache_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.hostcache_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.hostcache_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.hostcache_[num].domain.[num].wss |
Working set size |
vm.uma.hostcache_[num].flags |
Allocator configuration flags |
vm.uma.hostcache_[num].keg |
|
vm.uma.hostcache_[num].keg.align |
item alignment mask |
vm.uma.hostcache_[num].keg.domain |
|
vm.uma.hostcache_[num].keg.domain.[num] |
|
vm.uma.hostcache_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.hostcache_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.hostcache_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.hostcache_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.hostcache_[num].keg.ipers |
items available per-slab |
vm.uma.hostcache_[num].keg.name |
Keg name |
vm.uma.hostcache_[num].keg.ppera |
pages per-slab allocation |
vm.uma.hostcache_[num].keg.reserve |
number of reserved items |
vm.uma.hostcache_[num].keg.rsize |
Real object size with alignment |
vm.uma.hostcache_[num].limit |
|
vm.uma.hostcache_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.hostcache_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.hostcache_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.hostcache_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.hostcache_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.hostcache_[num].size |
Allocation size |
vm.uma.hostcache_[num].stats |
|
vm.uma.hostcache_[num].stats.allocs |
Total allocation calls |
vm.uma.hostcache_[num].stats.current |
Current number of allocated items |
vm.uma.hostcache_[num].stats.fails |
Number of allocation failures |
vm.uma.hostcache_[num].stats.frees |
Total free calls |
vm.uma.hostcache_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.ipq |
|
vm.uma.ipq.bucket_size |
Desired per-cpu cache size |
vm.uma.ipq.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.ipq.domain |
|
vm.uma.ipq.domain.[num] |
|
vm.uma.ipq.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.ipq.domain.[num].imax |
maximum item count in this period |
vm.uma.ipq.domain.[num].imin |
minimum item count in this period |
vm.uma.ipq.domain.[num].limin |
Long time minimum item count |
vm.uma.ipq.domain.[num].nitems |
number of items in this domain |
vm.uma.ipq.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.ipq.domain.[num].wss |
Working set size |
vm.uma.ipq.flags |
Allocator configuration flags |
vm.uma.ipq.keg |
|
vm.uma.ipq.keg.align |
item alignment mask |
vm.uma.ipq.keg.domain |
|
vm.uma.ipq.keg.domain.[num] |
|
vm.uma.ipq.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.ipq.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.ipq.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.ipq.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.ipq.keg.ipers |
items available per-slab |
vm.uma.ipq.keg.name |
Keg name |
vm.uma.ipq.keg.ppera |
pages per-slab allocation |
vm.uma.ipq.keg.reserve |
number of reserved items |
vm.uma.ipq.keg.rsize |
Real object size with alignment |
vm.uma.ipq.limit |
|
vm.uma.ipq.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.ipq.limit.items |
Current number of allocated items if limit is set |
vm.uma.ipq.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.ipq.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.ipq.limit.sleeps |
Total zone limit sleeps |
vm.uma.ipq.size |
Allocation size |
vm.uma.ipq.stats |
|
vm.uma.ipq.stats.allocs |
Total allocation calls |
vm.uma.ipq.stats.current |
Current number of allocated items |
vm.uma.ipq.stats.fails |
Number of allocation failures |
vm.uma.ipq.stats.frees |
Total free calls |
vm.uma.ipq.stats.xdomain |
Free calls from the wrong domain |
vm.uma.ipq_[num] |
|
vm.uma.ipq_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.ipq_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.ipq_[num].domain |
|
vm.uma.ipq_[num].domain.[num] |
|
vm.uma.ipq_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.ipq_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.ipq_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.ipq_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.ipq_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.ipq_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.ipq_[num].domain.[num].wss |
Working set size |
vm.uma.ipq_[num].flags |
Allocator configuration flags |
vm.uma.ipq_[num].keg |
|
vm.uma.ipq_[num].keg.align |
item alignment mask |
vm.uma.ipq_[num].keg.domain |
|
vm.uma.ipq_[num].keg.domain.[num] |
|
vm.uma.ipq_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.ipq_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.ipq_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.ipq_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.ipq_[num].keg.ipers |
items available per-slab |
vm.uma.ipq_[num].keg.name |
Keg name |
vm.uma.ipq_[num].keg.ppera |
pages per-slab allocation |
vm.uma.ipq_[num].keg.reserve |
number of reserved items |
vm.uma.ipq_[num].keg.rsize |
Real object size with alignment |
vm.uma.ipq_[num].limit |
|
vm.uma.ipq_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.ipq_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.ipq_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.ipq_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.ipq_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.ipq_[num].size |
Allocation size |
vm.uma.ipq_[num].stats |
|
vm.uma.ipq_[num].stats.allocs |
Total allocation calls |
vm.uma.ipq_[num].stats.current |
Current number of allocated items |
vm.uma.ipq_[num].stats.fails |
Number of allocation failures |
vm.uma.ipq_[num].stats.frees |
Total free calls |
vm.uma.ipq_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.itimer |
|
vm.uma.itimer.bucket_size |
Desired per-cpu cache size |
vm.uma.itimer.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.itimer.domain |
|
vm.uma.itimer.domain.[num] |
|
vm.uma.itimer.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.itimer.domain.[num].imax |
maximum item count in this period |
vm.uma.itimer.domain.[num].imin |
minimum item count in this period |
vm.uma.itimer.domain.[num].limin |
Long time minimum item count |
vm.uma.itimer.domain.[num].nitems |
number of items in this domain |
vm.uma.itimer.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.itimer.domain.[num].wss |
Working set size |
vm.uma.itimer.flags |
Allocator configuration flags |
vm.uma.itimer.keg |
|
vm.uma.itimer.keg.align |
item alignment mask |
vm.uma.itimer.keg.domain |
|
vm.uma.itimer.keg.domain.[num] |
|
vm.uma.itimer.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.itimer.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.itimer.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.itimer.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.itimer.keg.ipers |
items available per-slab |
vm.uma.itimer.keg.name |
Keg name |
vm.uma.itimer.keg.ppera |
pages per-slab allocation |
vm.uma.itimer.keg.reserve |
number of reserved items |
vm.uma.itimer.keg.rsize |
Real object size with alignment |
vm.uma.itimer.limit |
|
vm.uma.itimer.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.itimer.limit.items |
Current number of allocated items if limit is set |
vm.uma.itimer.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.itimer.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.itimer.limit.sleeps |
Total zone limit sleeps |
vm.uma.itimer.size |
Allocation size |
vm.uma.itimer.stats |
|
vm.uma.itimer.stats.allocs |
Total allocation calls |
vm.uma.itimer.stats.current |
Current number of allocated items |
vm.uma.itimer.stats.fails |
Number of allocation failures |
vm.uma.itimer.stats.frees |
Total free calls |
vm.uma.itimer.stats.xdomain |
Free calls from the wrong domain |
vm.uma.iwl_cmd_pool:iwlwifi[num] |
|
vm.uma.iwl_cmd_pool:iwlwifi[num].bucket_size |
Desired per-cpu cache size |
vm.uma.iwl_cmd_pool:iwlwifi[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.iwl_cmd_pool:iwlwifi[num].domain |
|
vm.uma.iwl_cmd_pool:iwlwifi[num].domain.[num] |
|
vm.uma.iwl_cmd_pool:iwlwifi[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.iwl_cmd_pool:iwlwifi[num].domain.[num].imax |
maximum item count in this period |
vm.uma.iwl_cmd_pool:iwlwifi[num].domain.[num].imin |
minimum item count in this period |
vm.uma.iwl_cmd_pool:iwlwifi[num].domain.[num].limin |
Long time minimum item count |
vm.uma.iwl_cmd_pool:iwlwifi[num].domain.[num].nitems |
number of items in this domain |
vm.uma.iwl_cmd_pool:iwlwifi[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.iwl_cmd_pool:iwlwifi[num].domain.[num].wss |
Working set size |
vm.uma.iwl_cmd_pool:iwlwifi[num].flags |
Allocator configuration flags |
vm.uma.iwl_cmd_pool:iwlwifi[num].keg |
|
vm.uma.iwl_cmd_pool:iwlwifi[num].keg.align |
item alignment mask |
vm.uma.iwl_cmd_pool:iwlwifi[num].keg.domain |
|
vm.uma.iwl_cmd_pool:iwlwifi[num].keg.domain.[num] |
|
vm.uma.iwl_cmd_pool:iwlwifi[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.iwl_cmd_pool:iwlwifi[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.iwl_cmd_pool:iwlwifi[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.iwl_cmd_pool:iwlwifi[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.iwl_cmd_pool:iwlwifi[num].keg.ipers |
items available per-slab |
vm.uma.iwl_cmd_pool:iwlwifi[num].keg.name |
Keg name |
vm.uma.iwl_cmd_pool:iwlwifi[num].keg.ppera |
pages per-slab allocation |
vm.uma.iwl_cmd_pool:iwlwifi[num].keg.reserve |
number of reserved items |
vm.uma.iwl_cmd_pool:iwlwifi[num].keg.rsize |
Real object size with alignment |
vm.uma.iwl_cmd_pool:iwlwifi[num].limit |
|
vm.uma.iwl_cmd_pool:iwlwifi[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.iwl_cmd_pool:iwlwifi[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.iwl_cmd_pool:iwlwifi[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.iwl_cmd_pool:iwlwifi[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.iwl_cmd_pool:iwlwifi[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.iwl_cmd_pool:iwlwifi[num].size |
Allocation size |
vm.uma.iwl_cmd_pool:iwlwifi[num].stats |
|
vm.uma.iwl_cmd_pool:iwlwifi[num].stats.allocs |
Total allocation calls |
vm.uma.iwl_cmd_pool:iwlwifi[num].stats.current |
Current number of allocated items |
vm.uma.iwl_cmd_pool:iwlwifi[num].stats.fails |
Number of allocation failures |
vm.uma.iwl_cmd_pool:iwlwifi[num].stats.frees |
Total free calls |
vm.uma.iwl_cmd_pool:iwlwifi[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.iwlwifi:bc |
|
vm.uma.iwlwifi:bc.bucket_size |
Desired per-cpu cache size |
vm.uma.iwlwifi:bc.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.iwlwifi:bc.domain |
|
vm.uma.iwlwifi:bc.domain.[num] |
|
vm.uma.iwlwifi:bc.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.iwlwifi:bc.domain.[num].imax |
maximum item count in this period |
vm.uma.iwlwifi:bc.domain.[num].imin |
minimum item count in this period |
vm.uma.iwlwifi:bc.domain.[num].limin |
Long time minimum item count |
vm.uma.iwlwifi:bc.domain.[num].nitems |
number of items in this domain |
vm.uma.iwlwifi:bc.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.iwlwifi:bc.domain.[num].wss |
Working set size |
vm.uma.iwlwifi:bc.flags |
Allocator configuration flags |
vm.uma.iwlwifi:bc.keg |
|
vm.uma.iwlwifi:bc.keg.name |
Keg name |
vm.uma.iwlwifi:bc.limit |
|
vm.uma.iwlwifi:bc.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.iwlwifi:bc.limit.items |
Current number of allocated items if limit is set |
vm.uma.iwlwifi:bc.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.iwlwifi:bc.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.iwlwifi:bc.limit.sleeps |
Total zone limit sleeps |
vm.uma.iwlwifi:bc.size |
Allocation size |
vm.uma.iwlwifi:bc.stats |
|
vm.uma.iwlwifi:bc.stats.allocs |
Total allocation calls |
vm.uma.iwlwifi:bc.stats.current |
Current number of allocated items |
vm.uma.iwlwifi:bc.stats.fails |
Number of allocation failures |
vm.uma.iwlwifi:bc.stats.frees |
Total free calls |
vm.uma.iwlwifi:bc.stats.xdomain |
Free calls from the wrong domain |
vm.uma.kenv |
|
vm.uma.kenv.bucket_size |
Desired per-cpu cache size |
vm.uma.kenv.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.kenv.domain |
|
vm.uma.kenv.domain.[num] |
|
vm.uma.kenv.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.kenv.domain.[num].imax |
maximum item count in this period |
vm.uma.kenv.domain.[num].imin |
minimum item count in this period |
vm.uma.kenv.domain.[num].limin |
Long time minimum item count |
vm.uma.kenv.domain.[num].nitems |
number of items in this domain |
vm.uma.kenv.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.kenv.domain.[num].wss |
Working set size |
vm.uma.kenv.flags |
Allocator configuration flags |
vm.uma.kenv.keg |
|
vm.uma.kenv.keg.align |
item alignment mask |
vm.uma.kenv.keg.domain |
|
vm.uma.kenv.keg.domain.[num] |
|
vm.uma.kenv.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.kenv.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.kenv.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.kenv.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.kenv.keg.ipers |
items available per-slab |
vm.uma.kenv.keg.name |
Keg name |
vm.uma.kenv.keg.ppera |
pages per-slab allocation |
vm.uma.kenv.keg.reserve |
number of reserved items |
vm.uma.kenv.keg.rsize |
Real object size with alignment |
vm.uma.kenv.limit |
|
vm.uma.kenv.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.kenv.limit.items |
Current number of allocated items if limit is set |
vm.uma.kenv.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.kenv.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.kenv.limit.sleeps |
Total zone limit sleeps |
vm.uma.kenv.size |
Allocation size |
vm.uma.kenv.stats |
|
vm.uma.kenv.stats.allocs |
Total allocation calls |
vm.uma.kenv.stats.current |
Current number of allocated items |
vm.uma.kenv.stats.fails |
Number of allocation failures |
vm.uma.kenv.stats.frees |
Total free calls |
vm.uma.kenv.stats.xdomain |
Free calls from the wrong domain |
vm.uma.ksiginfo |
|
vm.uma.ksiginfo.bucket_size |
Desired per-cpu cache size |
vm.uma.ksiginfo.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.ksiginfo.domain |
|
vm.uma.ksiginfo.domain.[num] |
|
vm.uma.ksiginfo.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.ksiginfo.domain.[num].imax |
maximum item count in this period |
vm.uma.ksiginfo.domain.[num].imin |
minimum item count in this period |
vm.uma.ksiginfo.domain.[num].limin |
Long time minimum item count |
vm.uma.ksiginfo.domain.[num].nitems |
number of items in this domain |
vm.uma.ksiginfo.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.ksiginfo.domain.[num].wss |
Working set size |
vm.uma.ksiginfo.flags |
Allocator configuration flags |
vm.uma.ksiginfo.keg |
|
vm.uma.ksiginfo.keg.align |
item alignment mask |
vm.uma.ksiginfo.keg.domain |
|
vm.uma.ksiginfo.keg.domain.[num] |
|
vm.uma.ksiginfo.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.ksiginfo.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.ksiginfo.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.ksiginfo.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.ksiginfo.keg.ipers |
items available per-slab |
vm.uma.ksiginfo.keg.name |
Keg name |
vm.uma.ksiginfo.keg.ppera |
pages per-slab allocation |
vm.uma.ksiginfo.keg.reserve |
number of reserved items |
vm.uma.ksiginfo.keg.rsize |
Real object size with alignment |
vm.uma.ksiginfo.limit |
|
vm.uma.ksiginfo.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.ksiginfo.limit.items |
Current number of allocated items if limit is set |
vm.uma.ksiginfo.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.ksiginfo.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.ksiginfo.limit.sleeps |
Total zone limit sleeps |
vm.uma.ksiginfo.size |
Allocation size |
vm.uma.ksiginfo.stats |
|
vm.uma.ksiginfo.stats.allocs |
Total allocation calls |
vm.uma.ksiginfo.stats.current |
Current number of allocated items |
vm.uma.ksiginfo.stats.fails |
Number of allocation failures |
vm.uma.ksiginfo.stats.frees |
Total free calls |
vm.uma.ksiginfo.stats.xdomain |
Free calls from the wrong domain |
vm.uma.kstack_cache |
|
vm.uma.kstack_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.kstack_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.kstack_cache.domain |
|
vm.uma.kstack_cache.domain.[num] |
|
vm.uma.kstack_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.kstack_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.kstack_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.kstack_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.kstack_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.kstack_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.kstack_cache.domain.[num].wss |
Working set size |
vm.uma.kstack_cache.flags |
Allocator configuration flags |
vm.uma.kstack_cache.keg |
|
vm.uma.kstack_cache.keg.name |
Keg name |
vm.uma.kstack_cache.limit |
|
vm.uma.kstack_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.kstack_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.kstack_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.kstack_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.kstack_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.kstack_cache.size |
Allocation size |
vm.uma.kstack_cache.stats |
|
vm.uma.kstack_cache.stats.allocs |
Total allocation calls |
vm.uma.kstack_cache.stats.current |
Current number of allocated items |
vm.uma.kstack_cache.stats.fails |
Number of allocation failures |
vm.uma.kstack_cache.stats.frees |
Total free calls |
vm.uma.kstack_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.linux_dma_object |
|
vm.uma.linux_dma_object.bucket_size |
Desired per-cpu cache size |
vm.uma.linux_dma_object.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.linux_dma_object.domain |
|
vm.uma.linux_dma_object.domain.[num] |
|
vm.uma.linux_dma_object.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.linux_dma_object.domain.[num].imax |
maximum item count in this period |
vm.uma.linux_dma_object.domain.[num].imin |
minimum item count in this period |
vm.uma.linux_dma_object.domain.[num].limin |
Long time minimum item count |
vm.uma.linux_dma_object.domain.[num].nitems |
number of items in this domain |
vm.uma.linux_dma_object.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.linux_dma_object.domain.[num].wss |
Working set size |
vm.uma.linux_dma_object.flags |
Allocator configuration flags |
vm.uma.linux_dma_object.keg |
|
vm.uma.linux_dma_object.keg.align |
item alignment mask |
vm.uma.linux_dma_object.keg.domain |
|
vm.uma.linux_dma_object.keg.domain.[num] |
|
vm.uma.linux_dma_object.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.linux_dma_object.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.linux_dma_object.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.linux_dma_object.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.linux_dma_object.keg.ipers |
items available per-slab |
vm.uma.linux_dma_object.keg.name |
Keg name |
vm.uma.linux_dma_object.keg.ppera |
pages per-slab allocation |
vm.uma.linux_dma_object.keg.reserve |
number of reserved items |
vm.uma.linux_dma_object.keg.rsize |
Real object size with alignment |
vm.uma.linux_dma_object.limit |
|
vm.uma.linux_dma_object.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.linux_dma_object.limit.items |
Current number of allocated items if limit is set |
vm.uma.linux_dma_object.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.linux_dma_object.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.linux_dma_object.limit.sleeps |
Total zone limit sleeps |
vm.uma.linux_dma_object.size |
Allocation size |
vm.uma.linux_dma_object.stats |
|
vm.uma.linux_dma_object.stats.allocs |
Total allocation calls |
vm.uma.linux_dma_object.stats.current |
Current number of allocated items |
vm.uma.linux_dma_object.stats.fails |
Number of allocation failures |
vm.uma.linux_dma_object.stats.frees |
Total free calls |
vm.uma.linux_dma_object.stats.xdomain |
Free calls from the wrong domain |
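The keg nodes above describe slab geometry: each ppera-page slab allocation is cut into ipers items of rsize bytes. Assuming in-slab headers and ignoring alignment adjustments, the reported efficiency roughly corresponds to

    efficiency ≈ 100 * (ipers * rsize) / (ppera * PAGE_SIZE)

that is, the share of each slab that is usable item space rather than padding. The kernel's exact calculation also accounts for item alignment and off-page slab headers, so treat this formula as an approximation rather than the definitive computation.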
vm.uma.linux_dma_pctrie |
|
vm.uma.linux_dma_pctrie.bucket_size |
Desired per-cpu cache size |
vm.uma.linux_dma_pctrie.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.linux_dma_pctrie.domain |
|
vm.uma.linux_dma_pctrie.domain.[num] |
|
vm.uma.linux_dma_pctrie.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.linux_dma_pctrie.domain.[num].imax |
maximum item count in this period |
vm.uma.linux_dma_pctrie.domain.[num].imin |
minimum item count in this period |
vm.uma.linux_dma_pctrie.domain.[num].limin |
Long time minimum item count |
vm.uma.linux_dma_pctrie.domain.[num].nitems |
number of items in this domain |
vm.uma.linux_dma_pctrie.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.linux_dma_pctrie.domain.[num].wss |
Working set size |
vm.uma.linux_dma_pctrie.flags |
Allocator configuration flags |
vm.uma.linux_dma_pctrie.keg |
|
vm.uma.linux_dma_pctrie.keg.align |
item alignment mask |
vm.uma.linux_dma_pctrie.keg.domain |
|
vm.uma.linux_dma_pctrie.keg.domain.[num] |
|
vm.uma.linux_dma_pctrie.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.linux_dma_pctrie.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.linux_dma_pctrie.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.linux_dma_pctrie.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.linux_dma_pctrie.keg.ipers |
items available per-slab |
vm.uma.linux_dma_pctrie.keg.name |
Keg name |
vm.uma.linux_dma_pctrie.keg.ppera |
pages per-slab allocation |
vm.uma.linux_dma_pctrie.keg.reserve |
number of reserved items |
vm.uma.linux_dma_pctrie.keg.rsize |
Real object size with alignment |
vm.uma.linux_dma_pctrie.limit |
|
vm.uma.linux_dma_pctrie.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.linux_dma_pctrie.limit.items |
Current number of allocated items if limit is set |
vm.uma.linux_dma_pctrie.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.linux_dma_pctrie.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.linux_dma_pctrie.limit.sleeps |
Total zone limit sleeps |
vm.uma.linux_dma_pctrie.size |
Allocation size |
vm.uma.linux_dma_pctrie.stats |
|
vm.uma.linux_dma_pctrie.stats.allocs |
Total allocation calls |
vm.uma.linux_dma_pctrie.stats.current |
Current number of allocated items |
vm.uma.linux_dma_pctrie.stats.fails |
Number of allocation failures |
vm.uma.linux_dma_pctrie.stats.frees |
Total free calls |
vm.uma.linux_dma_pctrie.stats.xdomain |
Free calls from the wrong domain |
vm.uma.lkpicurr |
|
vm.uma.lkpicurr.bucket_size |
Desired per-cpu cache size |
vm.uma.lkpicurr.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.lkpicurr.domain |
|
vm.uma.lkpicurr.domain.[num] |
|
vm.uma.lkpicurr.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.lkpicurr.domain.[num].imax |
maximum item count in this period |
vm.uma.lkpicurr.domain.[num].imin |
minimum item count in this period |
vm.uma.lkpicurr.domain.[num].limin |
Long time minimum item count |
vm.uma.lkpicurr.domain.[num].nitems |
number of items in this domain |
vm.uma.lkpicurr.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.lkpicurr.domain.[num].wss |
Working set size |
vm.uma.lkpicurr.flags |
Allocator configuration flags |
vm.uma.lkpicurr.keg |
|
vm.uma.lkpicurr.keg.align |
item alignment mask |
vm.uma.lkpicurr.keg.domain |
|
vm.uma.lkpicurr.keg.domain.[num] |
|
vm.uma.lkpicurr.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.lkpicurr.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.lkpicurr.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.lkpicurr.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.lkpicurr.keg.ipers |
items available per-slab |
vm.uma.lkpicurr.keg.name |
Keg name |
vm.uma.lkpicurr.keg.ppera |
pages per-slab allocation |
vm.uma.lkpicurr.keg.reserve |
number of reserved items |
vm.uma.lkpicurr.keg.rsize |
Real object size with alignment |
vm.uma.lkpicurr.limit |
|
vm.uma.lkpicurr.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.lkpicurr.limit.items |
Current number of allocated items if limit is set |
vm.uma.lkpicurr.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.lkpicurr.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.lkpicurr.limit.sleeps |
Total zone limit sleeps |
vm.uma.lkpicurr.size |
Allocation size |
vm.uma.lkpicurr.stats |
|
vm.uma.lkpicurr.stats.allocs |
Total allocation calls |
vm.uma.lkpicurr.stats.current |
Current number of allocated items |
vm.uma.lkpicurr.stats.fails |
Number of allocation failures |
vm.uma.lkpicurr.stats.frees |
Total free calls |
vm.uma.lkpicurr.stats.xdomain |
Free calls from the wrong domain |
vm.uma.lkpimm |
|
vm.uma.lkpimm.bucket_size |
Desired per-cpu cache size |
vm.uma.lkpimm.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.lkpimm.domain |
|
vm.uma.lkpimm.domain.[num] |
|
vm.uma.lkpimm.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.lkpimm.domain.[num].imax |
maximum item count in this period |
vm.uma.lkpimm.domain.[num].imin |
minimum item count in this period |
vm.uma.lkpimm.domain.[num].limin |
Long time minimum item count |
vm.uma.lkpimm.domain.[num].nitems |
number of items in this domain |
vm.uma.lkpimm.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.lkpimm.domain.[num].wss |
Working set size |
vm.uma.lkpimm.flags |
Allocator configuration flags |
vm.uma.lkpimm.keg |
|
vm.uma.lkpimm.keg.align |
item alignment mask |
vm.uma.lkpimm.keg.domain |
|
vm.uma.lkpimm.keg.domain.[num] |
|
vm.uma.lkpimm.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.lkpimm.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.lkpimm.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.lkpimm.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.lkpimm.keg.ipers |
items available per-slab |
vm.uma.lkpimm.keg.name |
Keg name |
vm.uma.lkpimm.keg.ppera |
pages per-slab allocation |
vm.uma.lkpimm.keg.reserve |
number of reserved items |
vm.uma.lkpimm.keg.rsize |
Real object size with alignment |
vm.uma.lkpimm.limit |
|
vm.uma.lkpimm.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.lkpimm.limit.items |
Current number of allocated items if limit is set |
vm.uma.lkpimm.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.lkpimm.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.lkpimm.limit.sleeps |
Total zone limit sleeps |
vm.uma.lkpimm.size |
Allocation size |
vm.uma.lkpimm.stats |
|
vm.uma.lkpimm.stats.allocs |
Total allocation calls |
vm.uma.lkpimm.stats.current |
Current number of allocated items |
vm.uma.lkpimm.stats.fails |
Number of allocation failures |
vm.uma.lkpimm.stats.frees |
Total free calls |
vm.uma.lkpimm.stats.xdomain |
Free calls from the wrong domain |
vm.uma.malloc_[num] |
|
vm.uma.malloc_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.malloc_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.malloc_[num].domain |
|
vm.uma.malloc_[num].domain.[num] |
|
vm.uma.malloc_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.malloc_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.malloc_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.malloc_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.malloc_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.malloc_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.malloc_[num].domain.[num].wss |
Working set size |
vm.uma.malloc_[num].flags |
Allocator configuration flags |
vm.uma.malloc_[num].keg |
|
vm.uma.malloc_[num].keg.align |
item alignment mask |
vm.uma.malloc_[num].keg.domain |
|
vm.uma.malloc_[num].keg.domain.[num] |
|
vm.uma.malloc_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.malloc_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.malloc_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.malloc_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.malloc_[num].keg.ipers |
items available per-slab |
vm.uma.malloc_[num].keg.name |
Keg name |
vm.uma.malloc_[num].keg.ppera |
pages per-slab allocation |
vm.uma.malloc_[num].keg.reserve |
number of reserved items |
vm.uma.malloc_[num].keg.rsize |
Real object size with alignment |
vm.uma.malloc_[num].limit |
|
vm.uma.malloc_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.malloc_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.malloc_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.malloc_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.malloc_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.malloc_[num].size |
Allocation size |
vm.uma.malloc_[num].stats |
|
vm.uma.malloc_[num].stats.allocs |
Total allocation calls |
vm.uma.malloc_[num].stats.current |
Current number of allocated items |
vm.uma.malloc_[num].stats.fails |
Number of allocation failures |
vm.uma.malloc_[num].stats.frees |
Total free calls |
vm.uma.malloc_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.mbuf |
|
vm.uma.mbuf.bucket_size |
Desired per-cpu cache size |
vm.uma.mbuf.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.mbuf.domain |
|
vm.uma.mbuf.domain.[num] |
|
vm.uma.mbuf.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.mbuf.domain.[num].imax |
maximum item count in this period |
vm.uma.mbuf.domain.[num].imin |
minimum item count in this period |
vm.uma.mbuf.domain.[num].limin |
Long time minimum item count |
vm.uma.mbuf.domain.[num].nitems |
number of items in this domain |
vm.uma.mbuf.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.mbuf.domain.[num].wss |
Working set size |
vm.uma.mbuf.flags |
Allocator configuration flags |
vm.uma.mbuf.keg |
|
vm.uma.mbuf.keg.align |
item alignment mask |
vm.uma.mbuf.keg.domain |
|
vm.uma.mbuf.keg.domain.[num] |
|
vm.uma.mbuf.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.mbuf.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.mbuf.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.mbuf.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.mbuf.keg.ipers |
items available per-slab |
vm.uma.mbuf.keg.name |
Keg name |
vm.uma.mbuf.keg.ppera |
pages per-slab allocation |
vm.uma.mbuf.keg.reserve |
number of reserved items |
vm.uma.mbuf.keg.rsize |
Real object size with alignment |
vm.uma.mbuf.limit |
|
vm.uma.mbuf.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.mbuf.limit.items |
Current number of allocated items if limit is set |
vm.uma.mbuf.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.mbuf.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.mbuf.limit.sleeps |
Total zone limit sleeps |
vm.uma.mbuf.size |
Allocation size |
vm.uma.mbuf.stats |
|
vm.uma.mbuf.stats.allocs |
Total allocation calls |
vm.uma.mbuf.stats.current |
Current number of allocated items |
vm.uma.mbuf.stats.fails |
Number of allocation failures |
vm.uma.mbuf.stats.frees |
Total free calls |
vm.uma.mbuf.stats.xdomain |
Free calls from the wrong domain |
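On NUMA machines each zone carries one domain.[num] subtree per memory domain. Below is a small sketch, under the same assumptions as the earlier example, that walks the per-domain nitems counters of one zone; vm.uma.mbuf is used only as an example name, and the domain count is taken from vm.ndomains, which is 1 on non-NUMA systems.

/*
 * Sketch: walk the per-NUMA-domain "nitems" counters of one UMA zone.
 * The zone name is an example; the loop bound comes from vm.ndomains.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	int ndomains, i;
	size_t len = sizeof(ndomains);
	char name[128];

	if (sysctlbyname("vm.ndomains", &ndomains, &len, NULL, 0) == -1)
		err(1, "vm.ndomains");
	for (i = 0; i < ndomains; i++) {
		uint64_t nitems = 0;

		len = sizeof(nitems);
		snprintf(name, sizeof(name),
		    "vm.uma.mbuf.domain.%d.nitems", i);
		if (sysctlbyname(name, &nitems, &len, NULL, 0) == -1)
			err(1, "%s", name);
		printf("domain %d: %ju items\n", i, (uintmax_t)nitems);
	}
	return (0);
}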
vm.uma.mbuf_cluster |
|
vm.uma.mbuf_cluster.bucket_size |
Desired per-cpu cache size |
vm.uma.mbuf_cluster.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.mbuf_cluster.domain |
|
vm.uma.mbuf_cluster.domain.[num] |
|
vm.uma.mbuf_cluster.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.mbuf_cluster.domain.[num].imax |
maximum item count in this period |
vm.uma.mbuf_cluster.domain.[num].imin |
minimum item count in this period |
vm.uma.mbuf_cluster.domain.[num].limin |
Long time minimum item count |
vm.uma.mbuf_cluster.domain.[num].nitems |
number of items in this domain |
vm.uma.mbuf_cluster.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.mbuf_cluster.domain.[num].wss |
Working set size |
vm.uma.mbuf_cluster.flags |
Allocator configuration flags |
vm.uma.mbuf_cluster.keg |
|
vm.uma.mbuf_cluster.keg.align |
item alignment mask |
vm.uma.mbuf_cluster.keg.domain |
|
vm.uma.mbuf_cluster.keg.domain.[num] |
|
vm.uma.mbuf_cluster.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.mbuf_cluster.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.mbuf_cluster.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.mbuf_cluster.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.mbuf_cluster.keg.ipers |
items available per-slab |
vm.uma.mbuf_cluster.keg.name |
Keg name |
vm.uma.mbuf_cluster.keg.ppera |
pages per-slab allocation |
vm.uma.mbuf_cluster.keg.reserve |
number of reserved items |
vm.uma.mbuf_cluster.keg.rsize |
Real object size with alignment |
vm.uma.mbuf_cluster.limit |
|
vm.uma.mbuf_cluster.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.mbuf_cluster.limit.items |
Current number of allocated items if limit is set |
vm.uma.mbuf_cluster.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.mbuf_cluster.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.mbuf_cluster.limit.sleeps |
Total zone limit sleeps |
vm.uma.mbuf_cluster.size |
Allocation size |
vm.uma.mbuf_cluster.stats |
|
vm.uma.mbuf_cluster.stats.allocs |
Total allocation calls |
vm.uma.mbuf_cluster.stats.current |
Current number of allocated items |
vm.uma.mbuf_cluster.stats.fails |
Number of allocation failures |
vm.uma.mbuf_cluster.stats.frees |
Total free calls |
vm.uma.mbuf_cluster.stats.xdomain |
Free calls from the wrong domain |
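For zones with a configured cap, the limit nodes combine into a simple pressure check: occupancy (%) = 100 * limit.items / limit.max_items, and a growing limit.sleeps count (or a non-zero limit.sleepers) means threads are blocking until items are freed. Note that limit.items is only meaningful when a limit is actually set; for the mbuf cluster zone the cap is typically derived from the kern.ipc.nmbclusters tunable.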
vm.uma.mbuf_jumbo_[num]k |
|
vm.uma.mbuf_jumbo_[num]k.bucket_size |
Desired per-cpu cache size |
vm.uma.mbuf_jumbo_[num]k.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.mbuf_jumbo_[num]k.domain |
|
vm.uma.mbuf_jumbo_[num]k.domain.[num] |
|
vm.uma.mbuf_jumbo_[num]k.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.mbuf_jumbo_[num]k.domain.[num].imax |
maximum item count in this period |
vm.uma.mbuf_jumbo_[num]k.domain.[num].imin |
minimum item count in this period |
vm.uma.mbuf_jumbo_[num]k.domain.[num].limin |
Long time minimum item count |
vm.uma.mbuf_jumbo_[num]k.domain.[num].nitems |
number of items in this domain |
vm.uma.mbuf_jumbo_[num]k.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.mbuf_jumbo_[num]k.domain.[num].wss |
Working set size |
vm.uma.mbuf_jumbo_[num]k.flags |
Allocator configuration flags |
vm.uma.mbuf_jumbo_[num]k.keg |
|
vm.uma.mbuf_jumbo_[num]k.keg.align |
item alignment mask |
vm.uma.mbuf_jumbo_[num]k.keg.domain |
|
vm.uma.mbuf_jumbo_[num]k.keg.domain.[num] |
|
vm.uma.mbuf_jumbo_[num]k.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.mbuf_jumbo_[num]k.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.mbuf_jumbo_[num]k.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.mbuf_jumbo_[num]k.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.mbuf_jumbo_[num]k.keg.ipers |
items available per-slab |
vm.uma.mbuf_jumbo_[num]k.keg.name |
Keg name |
vm.uma.mbuf_jumbo_[num]k.keg.ppera |
pages per-slab allocation |
vm.uma.mbuf_jumbo_[num]k.keg.reserve |
number of reserved items |
vm.uma.mbuf_jumbo_[num]k.keg.rsize |
Real object size with alignment |
vm.uma.mbuf_jumbo_[num]k.limit |
|
vm.uma.mbuf_jumbo_[num]k.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.mbuf_jumbo_[num]k.limit.items |
Current number of allocated items if limit is set |
vm.uma.mbuf_jumbo_[num]k.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.mbuf_jumbo_[num]k.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.mbuf_jumbo_[num]k.limit.sleeps |
Total zone limit sleeps |
vm.uma.mbuf_jumbo_[num]k.size |
Allocation size |
vm.uma.mbuf_jumbo_[num]k.stats |
|
vm.uma.mbuf_jumbo_[num]k.stats.allocs |
Total allocation calls |
vm.uma.mbuf_jumbo_[num]k.stats.current |
Current number of allocated items |
vm.uma.mbuf_jumbo_[num]k.stats.fails |
Number of allocation failures |
vm.uma.mbuf_jumbo_[num]k.stats.frees |
Total free calls |
vm.uma.mbuf_jumbo_[num]k.stats.xdomain |
Free calls from the wrong domain |
vm.uma.mbuf_jumbo_page |
|
vm.uma.mbuf_jumbo_page.bucket_size |
Desired per-cpu cache size |
vm.uma.mbuf_jumbo_page.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.mbuf_jumbo_page.domain |
|
vm.uma.mbuf_jumbo_page.domain.[num] |
|
vm.uma.mbuf_jumbo_page.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.mbuf_jumbo_page.domain.[num].imax |
maximum item count in this period |
vm.uma.mbuf_jumbo_page.domain.[num].imin |
minimum item count in this period |
vm.uma.mbuf_jumbo_page.domain.[num].limin |
Long time minimum item count |
vm.uma.mbuf_jumbo_page.domain.[num].nitems |
number of items in this domain |
vm.uma.mbuf_jumbo_page.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.mbuf_jumbo_page.domain.[num].wss |
Working set size |
vm.uma.mbuf_jumbo_page.flags |
Allocator configuration flags |
vm.uma.mbuf_jumbo_page.keg |
|
vm.uma.mbuf_jumbo_page.keg.align |
item alignment mask |
vm.uma.mbuf_jumbo_page.keg.domain |
|
vm.uma.mbuf_jumbo_page.keg.domain.[num] |
|
vm.uma.mbuf_jumbo_page.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.mbuf_jumbo_page.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.mbuf_jumbo_page.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.mbuf_jumbo_page.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.mbuf_jumbo_page.keg.ipers |
items available per-slab |
vm.uma.mbuf_jumbo_page.keg.name |
Keg name |
vm.uma.mbuf_jumbo_page.keg.ppera |
pages per-slab allocation |
vm.uma.mbuf_jumbo_page.keg.reserve |
number of reserved items |
vm.uma.mbuf_jumbo_page.keg.rsize |
Real object size with alignment |
vm.uma.mbuf_jumbo_page.limit |
|
vm.uma.mbuf_jumbo_page.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.mbuf_jumbo_page.limit.items |
Current number of allocated items if limit is set |
vm.uma.mbuf_jumbo_page.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.mbuf_jumbo_page.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.mbuf_jumbo_page.limit.sleeps |
Total zone limit sleeps |
vm.uma.mbuf_jumbo_page.size |
Allocation size |
vm.uma.mbuf_jumbo_page.stats |
|
vm.uma.mbuf_jumbo_page.stats.allocs |
Total allocation calls |
vm.uma.mbuf_jumbo_page.stats.current |
Current number of allocated items |
vm.uma.mbuf_jumbo_page.stats.fails |
Number of allocation failures |
vm.uma.mbuf_jumbo_page.stats.frees |
Total free calls |
vm.uma.mbuf_jumbo_page.stats.xdomain |
Free calls from the wrong domain |
vm.uma.mbuf_packet |
|
vm.uma.mbuf_packet.bucket_size |
Desired per-cpu cache size |
vm.uma.mbuf_packet.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.mbuf_packet.domain |
|
vm.uma.mbuf_packet.domain.[num] |
|
vm.uma.mbuf_packet.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.mbuf_packet.domain.[num].imax |
maximum item count in this period |
vm.uma.mbuf_packet.domain.[num].imin |
minimum item count in this period |
vm.uma.mbuf_packet.domain.[num].limin |
Long time minimum item count |
vm.uma.mbuf_packet.domain.[num].nitems |
number of items in this domain |
vm.uma.mbuf_packet.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.mbuf_packet.domain.[num].wss |
Working set size |
vm.uma.mbuf_packet.flags |
Allocator configuration flags |
vm.uma.mbuf_packet.keg |
|
vm.uma.mbuf_packet.keg.align |
item alignment mask |
vm.uma.mbuf_packet.keg.domain |
|
vm.uma.mbuf_packet.keg.domain.[num] |
|
vm.uma.mbuf_packet.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.mbuf_packet.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.mbuf_packet.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.mbuf_packet.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.mbuf_packet.keg.ipers |
items available per-slab |
vm.uma.mbuf_packet.keg.name |
Keg name |
vm.uma.mbuf_packet.keg.ppera |
pages per-slab allocation |
vm.uma.mbuf_packet.keg.reserve |
number of reserved items |
vm.uma.mbuf_packet.keg.rsize |
Real object size with alignment |
vm.uma.mbuf_packet.limit |
|
vm.uma.mbuf_packet.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.mbuf_packet.limit.items |
Current number of allocated items if limit is set |
vm.uma.mbuf_packet.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.mbuf_packet.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.mbuf_packet.limit.sleeps |
Total zone limit sleeps |
vm.uma.mbuf_packet.size |
Allocation size |
vm.uma.mbuf_packet.stats |
|
vm.uma.mbuf_packet.stats.allocs |
Total allocation calls |
vm.uma.mbuf_packet.stats.current |
Current number of allocated items |
vm.uma.mbuf_packet.stats.fails |
Number of allocation failures |
vm.uma.mbuf_packet.stats.frees |
Total free calls |
vm.uma.mbuf_packet.stats.xdomain |
Free calls from the wrong domain |
vm.uma.mdpbuf |
|
vm.uma.mdpbuf.bucket_size |
Desired per-cpu cache size |
vm.uma.mdpbuf.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.mdpbuf.domain |
|
vm.uma.mdpbuf.domain.[num] |
|
vm.uma.mdpbuf.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.mdpbuf.domain.[num].imax |
maximum item count in this period |
vm.uma.mdpbuf.domain.[num].imin |
minimum item count in this period |
vm.uma.mdpbuf.domain.[num].limin |
Long time minimum item count |
vm.uma.mdpbuf.domain.[num].nitems |
number of items in this domain |
vm.uma.mdpbuf.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.mdpbuf.domain.[num].wss |
Working set size |
vm.uma.mdpbuf.flags |
Allocator configuration flags |
vm.uma.mdpbuf.keg |
|
vm.uma.mdpbuf.keg.align |
item alignment mask |
vm.uma.mdpbuf.keg.domain |
|
vm.uma.mdpbuf.keg.domain.[num] |
|
vm.uma.mdpbuf.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.mdpbuf.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.mdpbuf.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.mdpbuf.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.mdpbuf.keg.ipers |
items available per-slab |
vm.uma.mdpbuf.keg.name |
Keg name |
vm.uma.mdpbuf.keg.ppera |
pages per-slab allocation |
vm.uma.mdpbuf.keg.reserve |
number of reserved items |
vm.uma.mdpbuf.keg.rsize |
Real object size with alignment |
vm.uma.mdpbuf.limit |
|
vm.uma.mdpbuf.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.mdpbuf.limit.items |
Current number of allocated items if limit is set |
vm.uma.mdpbuf.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.mdpbuf.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.mdpbuf.limit.sleeps |
Total zone limit sleeps |
vm.uma.mdpbuf.size |
Allocation size |
vm.uma.mdpbuf.stats |
|
vm.uma.mdpbuf.stats.allocs |
Total allocation calls |
vm.uma.mdpbuf.stats.current |
Current number of allocated items |
vm.uma.mdpbuf.stats.fails |
Number of allocation failures |
vm.uma.mdpbuf.stats.frees |
Total free calls |
vm.uma.mdpbuf.stats.xdomain |
Free calls from the wrong domain |
vm.uma.metaslab_alloc_trace_cache |
|
vm.uma.metaslab_alloc_trace_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.metaslab_alloc_trace_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.metaslab_alloc_trace_cache.domain |
|
vm.uma.metaslab_alloc_trace_cache.domain.[num] |
|
vm.uma.metaslab_alloc_trace_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.metaslab_alloc_trace_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.metaslab_alloc_trace_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.metaslab_alloc_trace_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.metaslab_alloc_trace_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.metaslab_alloc_trace_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.metaslab_alloc_trace_cache.domain.[num].wss |
Working set size |
vm.uma.metaslab_alloc_trace_cache.flags |
Allocator configuration flags |
vm.uma.metaslab_alloc_trace_cache.keg |
|
vm.uma.metaslab_alloc_trace_cache.keg.align |
item alignment mask |
vm.uma.metaslab_alloc_trace_cache.keg.domain |
|
vm.uma.metaslab_alloc_trace_cache.keg.domain.[num] |
|
vm.uma.metaslab_alloc_trace_cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.metaslab_alloc_trace_cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.metaslab_alloc_trace_cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.metaslab_alloc_trace_cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.metaslab_alloc_trace_cache.keg.ipers |
items available per-slab |
vm.uma.metaslab_alloc_trace_cache.keg.name |
Keg name |
vm.uma.metaslab_alloc_trace_cache.keg.ppera |
pages per-slab allocation |
vm.uma.metaslab_alloc_trace_cache.keg.reserve |
number of reserved items |
vm.uma.metaslab_alloc_trace_cache.keg.rsize |
Real object size with alignment |
vm.uma.metaslab_alloc_trace_cache.limit |
|
vm.uma.metaslab_alloc_trace_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.metaslab_alloc_trace_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.metaslab_alloc_trace_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.metaslab_alloc_trace_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.metaslab_alloc_trace_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.metaslab_alloc_trace_cache.size |
Allocation size |
vm.uma.metaslab_alloc_trace_cache.stats |
|
vm.uma.metaslab_alloc_trace_cache.stats.allocs |
Total allocation calls |
vm.uma.metaslab_alloc_trace_cache.stats.current |
Current number of allocated items |
vm.uma.metaslab_alloc_trace_cache.stats.fails |
Number of allocation failures |
vm.uma.metaslab_alloc_trace_cache.stats.frees |
Total free calls |
vm.uma.metaslab_alloc_trace_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.mqnode |
|
vm.uma.mqnode.bucket_size |
Desired per-cpu cache size |
vm.uma.mqnode.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.mqnode.domain |
|
vm.uma.mqnode.domain.[num] |
|
vm.uma.mqnode.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.mqnode.domain.[num].imax |
maximum item count in this period |
vm.uma.mqnode.domain.[num].imin |
minimum item count in this period |
vm.uma.mqnode.domain.[num].limin |
Long time minimum item count |
vm.uma.mqnode.domain.[num].nitems |
number of items in this domain |
vm.uma.mqnode.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.mqnode.domain.[num].wss |
Working set size |
vm.uma.mqnode.flags |
Allocator configuration flags |
vm.uma.mqnode.keg |
|
vm.uma.mqnode.keg.align |
item alignment mask |
vm.uma.mqnode.keg.domain |
|
vm.uma.mqnode.keg.domain.[num] |
|
vm.uma.mqnode.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.mqnode.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.mqnode.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.mqnode.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.mqnode.keg.ipers |
items available per-slab |
vm.uma.mqnode.keg.name |
Keg name |
vm.uma.mqnode.keg.ppera |
pages per-slab allocation |
vm.uma.mqnode.keg.reserve |
number of reserved items |
vm.uma.mqnode.keg.rsize |
Real object size with alignment |
vm.uma.mqnode.limit |
|
vm.uma.mqnode.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.mqnode.limit.items |
Current number of allocated items if limit is set |
vm.uma.mqnode.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.mqnode.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.mqnode.limit.sleeps |
Total zone limit sleeps |
vm.uma.mqnode.size |
Allocation size |
vm.uma.mqnode.stats |
|
vm.uma.mqnode.stats.allocs |
Total allocation calls |
vm.uma.mqnode.stats.current |
Current number of allocated items |
vm.uma.mqnode.stats.fails |
Number of allocation failures |
vm.uma.mqnode.stats.frees |
Total free calls |
vm.uma.mqnode.stats.xdomain |
Free calls from the wrong domain |
vm.uma.mqnotifier |
|
vm.uma.mqnotifier.bucket_size |
Desired per-cpu cache size |
vm.uma.mqnotifier.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.mqnotifier.domain |
|
vm.uma.mqnotifier.domain.[num] |
|
vm.uma.mqnotifier.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.mqnotifier.domain.[num].imax |
maximum item count in this period |
vm.uma.mqnotifier.domain.[num].imin |
minimum item count in this period |
vm.uma.mqnotifier.domain.[num].limin |
Long time minimum item count |
vm.uma.mqnotifier.domain.[num].nitems |
number of items in this domain |
vm.uma.mqnotifier.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.mqnotifier.domain.[num].wss |
Working set size |
vm.uma.mqnotifier.flags |
Allocator configuration flags |
vm.uma.mqnotifier.keg |
|
vm.uma.mqnotifier.keg.align |
item alignment mask |
vm.uma.mqnotifier.keg.domain |
|
vm.uma.mqnotifier.keg.domain.[num] |
|
vm.uma.mqnotifier.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.mqnotifier.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.mqnotifier.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.mqnotifier.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.mqnotifier.keg.ipers |
items available per-slab |
vm.uma.mqnotifier.keg.name |
Keg name |
vm.uma.mqnotifier.keg.ppera |
pages per-slab allocation |
vm.uma.mqnotifier.keg.reserve |
number of reserved items |
vm.uma.mqnotifier.keg.rsize |
Real object size with alignment |
vm.uma.mqnotifier.limit |
|
vm.uma.mqnotifier.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.mqnotifier.limit.items |
Current number of allocated items if limit is set |
vm.uma.mqnotifier.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.mqnotifier.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.mqnotifier.limit.sleeps |
Total zone limit sleeps |
vm.uma.mqnotifier.size |
Allocation size |
vm.uma.mqnotifier.stats |
|
vm.uma.mqnotifier.stats.allocs |
Total allocation calls |
vm.uma.mqnotifier.stats.current |
Current number of allocated items |
vm.uma.mqnotifier.stats.fails |
Number of allocation failures |
vm.uma.mqnotifier.stats.frees |
Total free calls |
vm.uma.mqnotifier.stats.xdomain |
Free calls from the wrong domain |
vm.uma.mqueue |
|
vm.uma.mqueue.bucket_size |
Desired per-cpu cache size |
vm.uma.mqueue.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.mqueue.domain |
|
vm.uma.mqueue.domain.[num] |
|
vm.uma.mqueue.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.mqueue.domain.[num].imax |
maximum item count in this period |
vm.uma.mqueue.domain.[num].imin |
minimum item count in this period |
vm.uma.mqueue.domain.[num].limin |
Long time minimum item count |
vm.uma.mqueue.domain.[num].nitems |
number of items in this domain |
vm.uma.mqueue.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.mqueue.domain.[num].wss |
Working set size |
vm.uma.mqueue.flags |
Allocator configuration flags |
vm.uma.mqueue.keg |
|
vm.uma.mqueue.keg.align |
item alignment mask |
vm.uma.mqueue.keg.domain |
|
vm.uma.mqueue.keg.domain.[num] |
|
vm.uma.mqueue.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.mqueue.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.mqueue.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.mqueue.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.mqueue.keg.ipers |
items available per-slab |
vm.uma.mqueue.keg.name |
Keg name |
vm.uma.mqueue.keg.ppera |
pages per-slab allocation |
vm.uma.mqueue.keg.reserve |
number of reserved items |
vm.uma.mqueue.keg.rsize |
Real object size with alignment |
vm.uma.mqueue.limit |
|
vm.uma.mqueue.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.mqueue.limit.items |
Current number of allocated items if limit is set |
vm.uma.mqueue.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.mqueue.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.mqueue.limit.sleeps |
Total zone limit sleeps |
vm.uma.mqueue.size |
Allocation size |
vm.uma.mqueue.stats |
|
vm.uma.mqueue.stats.allocs |
Total allocation calls |
vm.uma.mqueue.stats.current |
Current number of allocated items |
vm.uma.mqueue.stats.fails |
Number of allocation failures |
vm.uma.mqueue.stats.frees |
Total free calls |
vm.uma.mqueue.stats.xdomain |
Free calls from the wrong domain |
vm.uma.mvdata |
|
vm.uma.mvdata.bucket_size |
Desired per-cpu cache size |
vm.uma.mvdata.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.mvdata.domain |
|
vm.uma.mvdata.domain.[num] |
|
vm.uma.mvdata.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.mvdata.domain.[num].imax |
maximum item count in this period |
vm.uma.mvdata.domain.[num].imin |
minimum item count in this period |
vm.uma.mvdata.domain.[num].limin |
Long time minimum item count |
vm.uma.mvdata.domain.[num].nitems |
number of items in this domain |
vm.uma.mvdata.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.mvdata.domain.[num].wss |
Working set size |
vm.uma.mvdata.flags |
Allocator configuration flags |
vm.uma.mvdata.keg |
|
vm.uma.mvdata.keg.align |
item alignment mask |
vm.uma.mvdata.keg.domain |
|
vm.uma.mvdata.keg.domain.[num] |
|
vm.uma.mvdata.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.mvdata.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.mvdata.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.mvdata.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.mvdata.keg.ipers |
items available per-slab |
vm.uma.mvdata.keg.name |
Keg name |
vm.uma.mvdata.keg.ppera |
pages per-slab allocation |
vm.uma.mvdata.keg.reserve |
number of reserved items |
vm.uma.mvdata.keg.rsize |
Real object size with alignment |
vm.uma.mvdata.limit |
|
vm.uma.mvdata.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.mvdata.limit.items |
Current number of allocated items if limit is set |
vm.uma.mvdata.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.mvdata.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.mvdata.limit.sleeps |
Total zone limit sleeps |
vm.uma.mvdata.size |
Allocation size |
vm.uma.mvdata.stats |
|
vm.uma.mvdata.stats.allocs |
Total allocation calls |
vm.uma.mvdata.stats.current |
Current number of allocated items |
vm.uma.mvdata.stats.fails |
Number of allocation failures |
vm.uma.mvdata.stats.frees |
Total free calls |
vm.uma.mvdata.stats.xdomain |
Free calls from the wrong domain |
vm.uma.netlink |
|
vm.uma.netlink.bucket_size |
Desired per-cpu cache size |
vm.uma.netlink.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.netlink.domain |
|
vm.uma.netlink.domain.[num] |
|
vm.uma.netlink.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.netlink.domain.[num].imax |
maximum item count in this period |
vm.uma.netlink.domain.[num].imin |
minimum item count in this period |
vm.uma.netlink.domain.[num].limin |
Long time minimum item count |
vm.uma.netlink.domain.[num].nitems |
number of items in this domain |
vm.uma.netlink.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.netlink.domain.[num].wss |
Working set size |
vm.uma.netlink.flags |
Allocator configuration flags |
vm.uma.netlink.keg |
|
vm.uma.netlink.keg.align |
item alignment mask |
vm.uma.netlink.keg.domain |
|
vm.uma.netlink.keg.domain.[num] |
|
vm.uma.netlink.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.netlink.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.netlink.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.netlink.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.netlink.keg.ipers |
items available per-slab |
vm.uma.netlink.keg.name |
Keg name |
vm.uma.netlink.keg.ppera |
pages per-slab allocation |
vm.uma.netlink.keg.reserve |
number of reserved items |
vm.uma.netlink.keg.rsize |
Real object size with alignment |
vm.uma.netlink.limit |
|
vm.uma.netlink.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.netlink.limit.items |
Current number of allocated items if limit is set |
vm.uma.netlink.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.netlink.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.netlink.limit.sleeps |
Total zone limit sleeps |
vm.uma.netlink.size |
Allocation size |
vm.uma.netlink.stats |
|
vm.uma.netlink.stats.allocs |
Total allocation calls |
vm.uma.netlink.stats.current |
Current number of allocated items |
vm.uma.netlink.stats.fails |
Number of allocation failures |
vm.uma.netlink.stats.frees |
Total free calls |
vm.uma.netlink.stats.xdomain |
Free calls from the wrong domain |
vm.uma.nfspbuf |
|
vm.uma.nfspbuf.bucket_size |
Desired per-cpu cache size |
vm.uma.nfspbuf.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.nfspbuf.domain |
|
vm.uma.nfspbuf.domain.[num] |
|
vm.uma.nfspbuf.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.nfspbuf.domain.[num].imax |
maximum item count in this period |
vm.uma.nfspbuf.domain.[num].imin |
minimum item count in this period |
vm.uma.nfspbuf.domain.[num].limin |
Long time minimum item count |
vm.uma.nfspbuf.domain.[num].nitems |
number of items in this domain |
vm.uma.nfspbuf.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.nfspbuf.domain.[num].wss |
Working set size |
vm.uma.nfspbuf.flags |
Allocator configuration flags |
vm.uma.nfspbuf.keg |
|
vm.uma.nfspbuf.keg.align |
item alignment mask |
vm.uma.nfspbuf.keg.domain |
|
vm.uma.nfspbuf.keg.domain.[num] |
|
vm.uma.nfspbuf.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.nfspbuf.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.nfspbuf.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.nfspbuf.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.nfspbuf.keg.ipers |
items available per-slab |
vm.uma.nfspbuf.keg.name |
Keg name |
vm.uma.nfspbuf.keg.ppera |
pages per-slab allocation |
vm.uma.nfspbuf.keg.reserve |
number of reserved items |
vm.uma.nfspbuf.keg.rsize |
Real object size with alignment |
vm.uma.nfspbuf.limit |
|
vm.uma.nfspbuf.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.nfspbuf.limit.items |
Current number of allocated items if limit is set |
vm.uma.nfspbuf.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.nfspbuf.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.nfspbuf.limit.sleeps |
Total zone limit sleeps |
vm.uma.nfspbuf.size |
Allocation size |
vm.uma.nfspbuf.stats |
|
vm.uma.nfspbuf.stats.allocs |
Total allocation calls |
vm.uma.nfspbuf.stats.current |
Current number of allocated items |
vm.uma.nfspbuf.stats.fails |
Number of allocation failures |
vm.uma.nfspbuf.stats.frees |
Total free calls |
vm.uma.nfspbuf.stats.xdomain |
Free calls from the wrong domain |
vm.uma.nvidia_stack_t |
|
vm.uma.nvidia_stack_t.bucket_size |
Desired per-cpu cache size |
vm.uma.nvidia_stack_t.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.nvidia_stack_t.domain |
|
vm.uma.nvidia_stack_t.domain.[num] |
|
vm.uma.nvidia_stack_t.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.nvidia_stack_t.domain.[num].imax |
maximum item count in this period |
vm.uma.nvidia_stack_t.domain.[num].imin |
minimum item count in this period |
vm.uma.nvidia_stack_t.domain.[num].limin |
Long time minimum item count |
vm.uma.nvidia_stack_t.domain.[num].nitems |
number of items in this domain |
vm.uma.nvidia_stack_t.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.nvidia_stack_t.domain.[num].wss |
Working set size |
vm.uma.nvidia_stack_t.flags |
Allocator configuration flags |
vm.uma.nvidia_stack_t.keg |
|
vm.uma.nvidia_stack_t.keg.align |
item alignment mask |
vm.uma.nvidia_stack_t.keg.domain |
|
vm.uma.nvidia_stack_t.keg.domain.[num] |
|
vm.uma.nvidia_stack_t.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.nvidia_stack_t.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.nvidia_stack_t.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.nvidia_stack_t.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.nvidia_stack_t.keg.ipers |
items available per-slab |
vm.uma.nvidia_stack_t.keg.name |
Keg name |
vm.uma.nvidia_stack_t.keg.ppera |
pages per-slab allocation |
vm.uma.nvidia_stack_t.keg.reserve |
number of reserved items |
vm.uma.nvidia_stack_t.keg.rsize |
Real object size with alignment |
vm.uma.nvidia_stack_t.limit |
|
vm.uma.nvidia_stack_t.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.nvidia_stack_t.limit.items |
Current number of allocated items if limit is set |
vm.uma.nvidia_stack_t.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.nvidia_stack_t.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.nvidia_stack_t.limit.sleeps |
Total zone limit sleeps |
vm.uma.nvidia_stack_t.size |
Allocation size |
vm.uma.nvidia_stack_t.stats |
|
vm.uma.nvidia_stack_t.stats.allocs |
Total allocation calls |
vm.uma.nvidia_stack_t.stats.current |
Current number of allocated items |
vm.uma.nvidia_stack_t.stats.fails |
Number of allocation failures |
vm.uma.nvidia_stack_t.stats.frees |
Total free calls |
vm.uma.nvidia_stack_t.stats.xdomain |
Free calls from the wrong domain |
vm.uma.pbuf |
|
vm.uma.pbuf.bucket_size |
Desired per-cpu cache size |
vm.uma.pbuf.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pbuf.domain |
|
vm.uma.pbuf.domain.[num] |
|
vm.uma.pbuf.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pbuf.domain.[num].imax |
maximum item count in this period |
vm.uma.pbuf.domain.[num].imin |
minimum item count in this period |
vm.uma.pbuf.domain.[num].limin |
Long time minimum item count |
vm.uma.pbuf.domain.[num].nitems |
number of items in this domain |
vm.uma.pbuf.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pbuf.domain.[num].wss |
Working set size |
vm.uma.pbuf.flags |
Allocator configuration flags |
vm.uma.pbuf.keg |
|
vm.uma.pbuf.keg.align |
item alignment mask |
vm.uma.pbuf.keg.domain |
|
vm.uma.pbuf.keg.domain.[num] |
|
vm.uma.pbuf.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pbuf.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pbuf.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pbuf.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pbuf.keg.ipers |
items available per-slab |
vm.uma.pbuf.keg.name |
Keg name |
vm.uma.pbuf.keg.ppera |
pages per-slab allocation |
vm.uma.pbuf.keg.reserve |
number of reserved items |
vm.uma.pbuf.keg.rsize |
Real object size with alignment |
vm.uma.pbuf.limit |
|
vm.uma.pbuf.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pbuf.limit.items |
Current number of allocated items if limit is set |
vm.uma.pbuf.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pbuf.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pbuf.limit.sleeps |
Total zone limit sleeps |
vm.uma.pbuf.size |
Allocation size |
vm.uma.pbuf.stats |
|
vm.uma.pbuf.stats.allocs |
Total allocation calls |
vm.uma.pbuf.stats.current |
Current number of allocated items |
vm.uma.pbuf.stats.fails |
Number of allocation failures |
vm.uma.pbuf.stats.frees |
Total free calls |
vm.uma.pbuf.stats.xdomain |
Free calls from the wrong domain |
vm.uma.pcpu_[num] |
|
vm.uma.pcpu_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.pcpu_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pcpu_[num].domain |
|
vm.uma.pcpu_[num].domain.[num] |
|
vm.uma.pcpu_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pcpu_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.pcpu_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.pcpu_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.pcpu_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.pcpu_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pcpu_[num].domain.[num].wss |
Working set size |
vm.uma.pcpu_[num].flags |
Allocator configuration flags |
vm.uma.pcpu_[num].keg |
|
vm.uma.pcpu_[num].keg.align |
item alignment mask |
vm.uma.pcpu_[num].keg.domain |
|
vm.uma.pcpu_[num].keg.domain.[num] |
|
vm.uma.pcpu_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pcpu_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pcpu_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pcpu_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pcpu_[num].keg.ipers |
items available per-slab |
vm.uma.pcpu_[num].keg.name |
Keg name |
vm.uma.pcpu_[num].keg.ppera |
pages per-slab allocation |
vm.uma.pcpu_[num].keg.reserve |
number of reserved items |
vm.uma.pcpu_[num].keg.rsize |
Real object size with alignment |
vm.uma.pcpu_[num].limit |
|
vm.uma.pcpu_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pcpu_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.pcpu_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pcpu_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pcpu_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.pcpu_[num].size |
Allocation size |
vm.uma.pcpu_[num].stats |
|
vm.uma.pcpu_[num].stats.allocs |
Total allocation calls |
vm.uma.pcpu_[num].stats.current |
Current number of allocated items |
vm.uma.pcpu_[num].stats.fails |
Number of allocation failures |
vm.uma.pcpu_[num].stats.frees |
Total free calls |
vm.uma.pcpu_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_frag_entries |
|
vm.uma.pf_frag_entries.bucket_size |
Desired per-cpu cache size |
vm.uma.pf_frag_entries.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_frag_entries.domain |
|
vm.uma.pf_frag_entries.domain.[num] |
|
vm.uma.pf_frag_entries.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_frag_entries.domain.[num].imax |
maximum item count in this period |
vm.uma.pf_frag_entries.domain.[num].imin |
minimum item count in this period |
vm.uma.pf_frag_entries.domain.[num].limin |
Long time minimum item count |
vm.uma.pf_frag_entries.domain.[num].nitems |
number of items in this domain |
vm.uma.pf_frag_entries.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_frag_entries.domain.[num].wss |
Working set size |
vm.uma.pf_frag_entries.flags |
Allocator configuration flags |
vm.uma.pf_frag_entries.keg |
|
vm.uma.pf_frag_entries.keg.align |
item alignment mask |
vm.uma.pf_frag_entries.keg.domain |
|
vm.uma.pf_frag_entries.keg.domain.[num] |
|
vm.uma.pf_frag_entries.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_frag_entries.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_frag_entries.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_frag_entries.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_frag_entries.keg.ipers |
items available per-slab |
vm.uma.pf_frag_entries.keg.name |
Keg name |
vm.uma.pf_frag_entries.keg.ppera |
pages per-slab allocation |
vm.uma.pf_frag_entries.keg.reserve |
number of reserved items |
vm.uma.pf_frag_entries.keg.rsize |
Real object size with alignment |
vm.uma.pf_frag_entries.limit |
|
vm.uma.pf_frag_entries.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_frag_entries.limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_frag_entries.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_frag_entries.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_frag_entries.limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_frag_entries.size |
Allocation size |
vm.uma.pf_frag_entries.stats |
|
vm.uma.pf_frag_entries.stats.allocs |
Total allocation calls |
vm.uma.pf_frag_entries.stats.current |
Current number of allocated items |
vm.uma.pf_frag_entries.stats.fails |
Number of allocation failures |
vm.uma.pf_frag_entries.stats.frees |
Total free calls |
vm.uma.pf_frag_entries.stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_frag_entries_[num] |
|
vm.uma.pf_frag_entries_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.pf_frag_entries_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_frag_entries_[num].domain |
|
vm.uma.pf_frag_entries_[num].domain.[num] |
|
vm.uma.pf_frag_entries_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_frag_entries_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.pf_frag_entries_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.pf_frag_entries_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.pf_frag_entries_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.pf_frag_entries_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_frag_entries_[num].domain.[num].wss |
Working set size |
vm.uma.pf_frag_entries_[num].flags |
Allocator configuration flags |
vm.uma.pf_frag_entries_[num].keg |
|
vm.uma.pf_frag_entries_[num].keg.align |
item alignment mask |
vm.uma.pf_frag_entries_[num].keg.domain |
|
vm.uma.pf_frag_entries_[num].keg.domain.[num] |
|
vm.uma.pf_frag_entries_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_frag_entries_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_frag_entries_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_frag_entries_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_frag_entries_[num].keg.ipers |
items available per-slab |
vm.uma.pf_frag_entries_[num].keg.name |
Keg name |
vm.uma.pf_frag_entries_[num].keg.ppera |
pages per-slab allocation |
vm.uma.pf_frag_entries_[num].keg.reserve |
number of reserved items |
vm.uma.pf_frag_entries_[num].keg.rsize |
Real object size with alignment |
vm.uma.pf_frag_entries_[num].limit |
|
vm.uma.pf_frag_entries_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_frag_entries_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_frag_entries_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_frag_entries_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_frag_entries_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_frag_entries_[num].size |
Allocation size |
vm.uma.pf_frag_entries_[num].stats |
|
vm.uma.pf_frag_entries_[num].stats.allocs |
Total allocation calls |
vm.uma.pf_frag_entries_[num].stats.current |
Current number of allocated items |
vm.uma.pf_frag_entries_[num].stats.fails |
Number of allocation failures |
vm.uma.pf_frag_entries_[num].stats.frees |
Total free calls |
vm.uma.pf_frag_entries_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_frags |
|
vm.uma.pf_frags.bucket_size |
Desired per-cpu cache size |
vm.uma.pf_frags.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_frags.domain |
|
vm.uma.pf_frags.domain.[num] |
|
vm.uma.pf_frags.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_frags.domain.[num].imax |
maximum item count in this period |
vm.uma.pf_frags.domain.[num].imin |
minimum item count in this period |
vm.uma.pf_frags.domain.[num].limin |
Long time minimum item count |
vm.uma.pf_frags.domain.[num].nitems |
number of items in this domain |
vm.uma.pf_frags.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_frags.domain.[num].wss |
Working set size |
vm.uma.pf_frags.flags |
Allocator configuration flags |
vm.uma.pf_frags.keg |
|
vm.uma.pf_frags.keg.align |
item alignment mask |
vm.uma.pf_frags.keg.domain |
|
vm.uma.pf_frags.keg.domain.[num] |
|
vm.uma.pf_frags.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_frags.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_frags.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_frags.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_frags.keg.ipers |
items available per-slab |
vm.uma.pf_frags.keg.name |
Keg name |
vm.uma.pf_frags.keg.ppera |
pages per-slab allocation |
vm.uma.pf_frags.keg.reserve |
number of reserved items |
vm.uma.pf_frags.keg.rsize |
Real object size with alignment |
vm.uma.pf_frags.limit |
|
vm.uma.pf_frags.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_frags.limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_frags.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_frags.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_frags.limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_frags.size |
Allocation size |
vm.uma.pf_frags.stats |
|
vm.uma.pf_frags.stats.allocs |
Total allocation calls |
vm.uma.pf_frags.stats.current |
Current number of allocated items |
vm.uma.pf_frags.stats.fails |
Number of allocation failures |
vm.uma.pf_frags.stats.frees |
Total free calls |
vm.uma.pf_frags.stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_frags_[num] |
|
vm.uma.pf_frags_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.pf_frags_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_frags_[num].domain |
|
vm.uma.pf_frags_[num].domain.[num] |
|
vm.uma.pf_frags_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_frags_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.pf_frags_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.pf_frags_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.pf_frags_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.pf_frags_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_frags_[num].domain.[num].wss |
Working set size |
vm.uma.pf_frags_[num].flags |
Allocator configuration flags |
vm.uma.pf_frags_[num].keg |
|
vm.uma.pf_frags_[num].keg.align |
item alignment mask |
vm.uma.pf_frags_[num].keg.domain |
|
vm.uma.pf_frags_[num].keg.domain.[num] |
|
vm.uma.pf_frags_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_frags_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_frags_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_frags_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_frags_[num].keg.ipers |
items available per-slab |
vm.uma.pf_frags_[num].keg.name |
Keg name |
vm.uma.pf_frags_[num].keg.ppera |
pages per-slab allocation |
vm.uma.pf_frags_[num].keg.reserve |
number of reserved items |
vm.uma.pf_frags_[num].keg.rsize |
Real object size with alignment |
vm.uma.pf_frags_[num].limit |
|
vm.uma.pf_frags_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_frags_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_frags_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_frags_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_frags_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_frags_[num].size |
Allocation size |
vm.uma.pf_frags_[num].stats |
|
vm.uma.pf_frags_[num].stats.allocs |
Total allocation calls |
vm.uma.pf_frags_[num].stats.current |
Current number of allocated items |
vm.uma.pf_frags_[num].stats.fails |
Number of allocation failures |
vm.uma.pf_frags_[num].stats.frees |
Total free calls |
vm.uma.pf_frags_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_mtags |
|
vm.uma.pf_mtags.bucket_size |
Desired per-cpu cache size |
vm.uma.pf_mtags.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_mtags.domain |
|
vm.uma.pf_mtags.domain.[num] |
|
vm.uma.pf_mtags.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_mtags.domain.[num].imax |
maximum item count in this period |
vm.uma.pf_mtags.domain.[num].imin |
minimum item count in this period |
vm.uma.pf_mtags.domain.[num].limin |
Long time minimum item count |
vm.uma.pf_mtags.domain.[num].nitems |
number of items in this domain |
vm.uma.pf_mtags.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_mtags.domain.[num].wss |
Working set size |
vm.uma.pf_mtags.flags |
Allocator configuration flags |
vm.uma.pf_mtags.keg |
|
vm.uma.pf_mtags.keg.align |
item alignment mask |
vm.uma.pf_mtags.keg.domain |
|
vm.uma.pf_mtags.keg.domain.[num] |
|
vm.uma.pf_mtags.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_mtags.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_mtags.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_mtags.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_mtags.keg.ipers |
items available per-slab |
vm.uma.pf_mtags.keg.name |
Keg name |
vm.uma.pf_mtags.keg.ppera |
pages per-slab allocation |
vm.uma.pf_mtags.keg.reserve |
number of reserved items |
vm.uma.pf_mtags.keg.rsize |
Real object size with alignment |
vm.uma.pf_mtags.limit |
|
vm.uma.pf_mtags.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_mtags.limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_mtags.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_mtags.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_mtags.limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_mtags.size |
Allocation size |
vm.uma.pf_mtags.stats |
|
vm.uma.pf_mtags.stats.allocs |
Total allocation calls |
vm.uma.pf_mtags.stats.current |
Current number of allocated items |
vm.uma.pf_mtags.stats.fails |
Number of allocation failures |
vm.uma.pf_mtags.stats.frees |
Total free calls |
vm.uma.pf_mtags.stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_source_nodes |
|
vm.uma.pf_source_nodes.bucket_size |
Desired per-cpu cache size |
vm.uma.pf_source_nodes.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_source_nodes.domain |
|
vm.uma.pf_source_nodes.domain.[num] |
|
vm.uma.pf_source_nodes.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_source_nodes.domain.[num].imax |
maximum item count in this period |
vm.uma.pf_source_nodes.domain.[num].imin |
minimum item count in this period |
vm.uma.pf_source_nodes.domain.[num].limin |
Long time minimum item count |
vm.uma.pf_source_nodes.domain.[num].nitems |
number of items in this domain |
vm.uma.pf_source_nodes.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_source_nodes.domain.[num].wss |
Working set size |
vm.uma.pf_source_nodes.flags |
Allocator configuration flags |
vm.uma.pf_source_nodes.keg |
|
vm.uma.pf_source_nodes.keg.align |
item alignment mask |
vm.uma.pf_source_nodes.keg.domain |
|
vm.uma.pf_source_nodes.keg.domain.[num] |
|
vm.uma.pf_source_nodes.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_source_nodes.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_source_nodes.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_source_nodes.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_source_nodes.keg.ipers |
items available per-slab |
vm.uma.pf_source_nodes.keg.name |
Keg name |
vm.uma.pf_source_nodes.keg.ppera |
pages per-slab allocation |
vm.uma.pf_source_nodes.keg.reserve |
number of reserved items |
vm.uma.pf_source_nodes.keg.rsize |
Real object size with alignment |
vm.uma.pf_source_nodes.limit |
|
vm.uma.pf_source_nodes.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_source_nodes.limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_source_nodes.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_source_nodes.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_source_nodes.limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_source_nodes.size |
Allocation size |
vm.uma.pf_source_nodes.stats |
|
vm.uma.pf_source_nodes.stats.allocs |
Total allocation calls |
vm.uma.pf_source_nodes.stats.current |
Current number of allocated items |
vm.uma.pf_source_nodes.stats.fails |
Number of allocation failures |
vm.uma.pf_source_nodes.stats.frees |
Total free calls |
vm.uma.pf_source_nodes.stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_source_nodes_[num] |
|
vm.uma.pf_source_nodes_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.pf_source_nodes_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_source_nodes_[num].domain |
|
vm.uma.pf_source_nodes_[num].domain.[num] |
|
vm.uma.pf_source_nodes_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_source_nodes_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.pf_source_nodes_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.pf_source_nodes_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.pf_source_nodes_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.pf_source_nodes_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_source_nodes_[num].domain.[num].wss |
Working set size |
vm.uma.pf_source_nodes_[num].flags |
Allocator configuration flags |
vm.uma.pf_source_nodes_[num].keg |
|
vm.uma.pf_source_nodes_[num].keg.align |
item alignment mask |
vm.uma.pf_source_nodes_[num].keg.domain |
|
vm.uma.pf_source_nodes_[num].keg.domain.[num] |
|
vm.uma.pf_source_nodes_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_source_nodes_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_source_nodes_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_source_nodes_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_source_nodes_[num].keg.ipers |
items available per-slab |
vm.uma.pf_source_nodes_[num].keg.name |
Keg name |
vm.uma.pf_source_nodes_[num].keg.ppera |
pages per-slab allocation |
vm.uma.pf_source_nodes_[num].keg.reserve |
number of reserved items |
vm.uma.pf_source_nodes_[num].keg.rsize |
Real object size with alignment |
vm.uma.pf_source_nodes_[num].limit |
|
vm.uma.pf_source_nodes_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_source_nodes_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_source_nodes_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_source_nodes_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_source_nodes_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_source_nodes_[num].size |
Allocation size |
vm.uma.pf_source_nodes_[num].stats |
|
vm.uma.pf_source_nodes_[num].stats.allocs |
Total allocation calls |
vm.uma.pf_source_nodes_[num].stats.current |
Current number of allocated items |
vm.uma.pf_source_nodes_[num].stats.fails |
Number of allocation failures |
vm.uma.pf_source_nodes_[num].stats.frees |
Total free calls |
vm.uma.pf_source_nodes_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_state_keys |
|
vm.uma.pf_state_keys.bucket_size |
Desired per-cpu cache size |
vm.uma.pf_state_keys.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_state_keys.domain |
|
vm.uma.pf_state_keys.domain.[num] |
|
vm.uma.pf_state_keys.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_state_keys.domain.[num].imax |
maximum item count in this period |
vm.uma.pf_state_keys.domain.[num].imin |
minimum item count in this period |
vm.uma.pf_state_keys.domain.[num].limin |
Long time minimum item count |
vm.uma.pf_state_keys.domain.[num].nitems |
number of items in this domain |
vm.uma.pf_state_keys.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_state_keys.domain.[num].wss |
Working set size |
vm.uma.pf_state_keys.flags |
Allocator configuration flags |
vm.uma.pf_state_keys.keg |
|
vm.uma.pf_state_keys.keg.align |
item alignment mask |
vm.uma.pf_state_keys.keg.domain |
|
vm.uma.pf_state_keys.keg.domain.[num] |
|
vm.uma.pf_state_keys.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_state_keys.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_state_keys.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_state_keys.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_state_keys.keg.ipers |
items available per-slab |
vm.uma.pf_state_keys.keg.name |
Keg name |
vm.uma.pf_state_keys.keg.ppera |
pages per-slab allocation |
vm.uma.pf_state_keys.keg.reserve |
number of reserved items |
vm.uma.pf_state_keys.keg.rsize |
Real object size with alignment |
vm.uma.pf_state_keys.limit |
|
vm.uma.pf_state_keys.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_state_keys.limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_state_keys.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_state_keys.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_state_keys.limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_state_keys.size |
Allocation size |
vm.uma.pf_state_keys.stats |
|
vm.uma.pf_state_keys.stats.allocs |
Total allocation calls |
vm.uma.pf_state_keys.stats.current |
Current number of allocated items |
vm.uma.pf_state_keys.stats.fails |
Number of allocation failures |
vm.uma.pf_state_keys.stats.frees |
Total free calls |
vm.uma.pf_state_keys.stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_state_keys_[num] |
|
vm.uma.pf_state_keys_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.pf_state_keys_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_state_keys_[num].domain |
|
vm.uma.pf_state_keys_[num].domain.[num] |
|
vm.uma.pf_state_keys_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_state_keys_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.pf_state_keys_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.pf_state_keys_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.pf_state_keys_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.pf_state_keys_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_state_keys_[num].domain.[num].wss |
Working set size |
vm.uma.pf_state_keys_[num].flags |
Allocator configuration flags |
vm.uma.pf_state_keys_[num].keg |
|
vm.uma.pf_state_keys_[num].keg.align |
item alignment mask |
vm.uma.pf_state_keys_[num].keg.domain |
|
vm.uma.pf_state_keys_[num].keg.domain.[num] |
|
vm.uma.pf_state_keys_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_state_keys_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_state_keys_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_state_keys_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_state_keys_[num].keg.ipers |
items available per-slab |
vm.uma.pf_state_keys_[num].keg.name |
Keg name |
vm.uma.pf_state_keys_[num].keg.ppera |
pages per-slab allocation |
vm.uma.pf_state_keys_[num].keg.reserve |
number of reserved items |
vm.uma.pf_state_keys_[num].keg.rsize |
Real object size with alignment |
vm.uma.pf_state_keys_[num].limit |
|
vm.uma.pf_state_keys_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_state_keys_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_state_keys_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_state_keys_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_state_keys_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_state_keys_[num].size |
Allocation size |
vm.uma.pf_state_keys_[num].stats |
|
vm.uma.pf_state_keys_[num].stats.allocs |
Total allocation calls |
vm.uma.pf_state_keys_[num].stats.current |
Current number of allocated items |
vm.uma.pf_state_keys_[num].stats.fails |
Number of allocation failures |
vm.uma.pf_state_keys_[num].stats.frees |
Total free calls |
vm.uma.pf_state_keys_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_state_scrubs |
|
vm.uma.pf_state_scrubs.bucket_size |
Desired per-cpu cache size |
vm.uma.pf_state_scrubs.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_state_scrubs.domain |
|
vm.uma.pf_state_scrubs.domain.[num] |
|
vm.uma.pf_state_scrubs.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_state_scrubs.domain.[num].imax |
maximum item count in this period |
vm.uma.pf_state_scrubs.domain.[num].imin |
minimum item count in this period |
vm.uma.pf_state_scrubs.domain.[num].limin |
Long time minimum item count |
vm.uma.pf_state_scrubs.domain.[num].nitems |
number of items in this domain |
vm.uma.pf_state_scrubs.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_state_scrubs.domain.[num].wss |
Working set size |
vm.uma.pf_state_scrubs.flags |
Allocator configuration flags |
vm.uma.pf_state_scrubs.keg |
|
vm.uma.pf_state_scrubs.keg.align |
item alignment mask |
vm.uma.pf_state_scrubs.keg.domain |
|
vm.uma.pf_state_scrubs.keg.domain.[num] |
|
vm.uma.pf_state_scrubs.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_state_scrubs.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_state_scrubs.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_state_scrubs.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_state_scrubs.keg.ipers |
items available per-slab |
vm.uma.pf_state_scrubs.keg.name |
Keg name |
vm.uma.pf_state_scrubs.keg.ppera |
pages per-slab allocation |
vm.uma.pf_state_scrubs.keg.reserve |
number of reserved items |
vm.uma.pf_state_scrubs.keg.rsize |
Real object size with alignment |
vm.uma.pf_state_scrubs.limit |
|
vm.uma.pf_state_scrubs.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_state_scrubs.limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_state_scrubs.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_state_scrubs.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_state_scrubs.limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_state_scrubs.size |
Allocation size |
vm.uma.pf_state_scrubs.stats |
|
vm.uma.pf_state_scrubs.stats.allocs |
Total allocation calls |
vm.uma.pf_state_scrubs.stats.current |
Current number of allocated items |
vm.uma.pf_state_scrubs.stats.fails |
Number of allocation failures |
vm.uma.pf_state_scrubs.stats.frees |
Total free calls |
vm.uma.pf_state_scrubs.stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_state_scrubs_[num] |
|
vm.uma.pf_state_scrubs_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.pf_state_scrubs_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_state_scrubs_[num].domain |
|
vm.uma.pf_state_scrubs_[num].domain.[num] |
|
vm.uma.pf_state_scrubs_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_state_scrubs_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.pf_state_scrubs_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.pf_state_scrubs_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.pf_state_scrubs_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.pf_state_scrubs_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_state_scrubs_[num].domain.[num].wss |
Working set size |
vm.uma.pf_state_scrubs_[num].flags |
Allocator configuration flags |
vm.uma.pf_state_scrubs_[num].keg |
|
vm.uma.pf_state_scrubs_[num].keg.align |
item alignment mask |
vm.uma.pf_state_scrubs_[num].keg.domain |
|
vm.uma.pf_state_scrubs_[num].keg.domain.[num] |
|
vm.uma.pf_state_scrubs_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_state_scrubs_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_state_scrubs_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_state_scrubs_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_state_scrubs_[num].keg.ipers |
items available per-slab |
vm.uma.pf_state_scrubs_[num].keg.name |
Keg name |
vm.uma.pf_state_scrubs_[num].keg.ppera |
pages per-slab allocation |
vm.uma.pf_state_scrubs_[num].keg.reserve |
number of reserved items |
vm.uma.pf_state_scrubs_[num].keg.rsize |
Real object size with alignment |
vm.uma.pf_state_scrubs_[num].limit |
|
vm.uma.pf_state_scrubs_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_state_scrubs_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_state_scrubs_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_state_scrubs_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_state_scrubs_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_state_scrubs_[num].size |
Allocation size |
vm.uma.pf_state_scrubs_[num].stats |
|
vm.uma.pf_state_scrubs_[num].stats.allocs |
Total allocation calls |
vm.uma.pf_state_scrubs_[num].stats.current |
Current number of allocated items |
vm.uma.pf_state_scrubs_[num].stats.fails |
Number of allocation failures |
vm.uma.pf_state_scrubs_[num].stats.frees |
Total free calls |
vm.uma.pf_state_scrubs_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_states |
|
vm.uma.pf_states.bucket_size |
Desired per-cpu cache size |
vm.uma.pf_states.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_states.domain |
|
vm.uma.pf_states.domain.[num] |
|
vm.uma.pf_states.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_states.domain.[num].imax |
maximum item count in this period |
vm.uma.pf_states.domain.[num].imin |
minimum item count in this period |
vm.uma.pf_states.domain.[num].limin |
Long time minimum item count |
vm.uma.pf_states.domain.[num].nitems |
number of items in this domain |
vm.uma.pf_states.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_states.domain.[num].wss |
Working set size |
vm.uma.pf_states.flags |
Allocator configuration flags |
vm.uma.pf_states.keg |
|
vm.uma.pf_states.keg.align |
item alignment mask |
vm.uma.pf_states.keg.domain |
|
vm.uma.pf_states.keg.domain.[num] |
|
vm.uma.pf_states.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_states.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_states.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_states.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_states.keg.ipers |
items available per-slab |
vm.uma.pf_states.keg.name |
Keg name |
vm.uma.pf_states.keg.ppera |
pages per-slab allocation |
vm.uma.pf_states.keg.reserve |
number of reserved items |
vm.uma.pf_states.keg.rsize |
Real object size with alignment |
vm.uma.pf_states.limit |
|
vm.uma.pf_states.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_states.limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_states.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_states.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_states.limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_states.size |
Allocation size |
vm.uma.pf_states.stats |
|
vm.uma.pf_states.stats.allocs |
Total allocation calls |
vm.uma.pf_states.stats.current |
Current number of allocated items |
vm.uma.pf_states.stats.fails |
Number of allocation failures |
vm.uma.pf_states.stats.frees |
Total free calls |
vm.uma.pf_states.stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_states_[num] |
|
vm.uma.pf_states_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.pf_states_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_states_[num].domain |
|
vm.uma.pf_states_[num].domain.[num] |
|
vm.uma.pf_states_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_states_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.pf_states_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.pf_states_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.pf_states_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.pf_states_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_states_[num].domain.[num].wss |
Working set size |
vm.uma.pf_states_[num].flags |
Allocator configuration flags |
vm.uma.pf_states_[num].keg |
|
vm.uma.pf_states_[num].keg.align |
item alignment mask |
vm.uma.pf_states_[num].keg.domain |
|
vm.uma.pf_states_[num].keg.domain.[num] |
|
vm.uma.pf_states_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_states_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_states_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_states_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_states_[num].keg.ipers |
items available per-slab |
vm.uma.pf_states_[num].keg.name |
Keg name |
vm.uma.pf_states_[num].keg.ppera |
pages per-slab allocation |
vm.uma.pf_states_[num].keg.reserve |
number of reserved items |
vm.uma.pf_states_[num].keg.rsize |
Real object size with alignment |
vm.uma.pf_states_[num].limit |
|
vm.uma.pf_states_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_states_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_states_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_states_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_states_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_states_[num].size |
Allocation size |
vm.uma.pf_states_[num].stats |
|
vm.uma.pf_states_[num].stats.allocs |
Total allocation calls |
vm.uma.pf_states_[num].stats.current |
Current number of allocated items |
vm.uma.pf_states_[num].stats.fails |
Number of allocation failures |
vm.uma.pf_states_[num].stats.frees |
Total free calls |
vm.uma.pf_states_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_table_entries |
|
vm.uma.pf_table_entries.bucket_size |
Desired per-cpu cache size |
vm.uma.pf_table_entries.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_table_entries.domain |
|
vm.uma.pf_table_entries.domain.[num] |
|
vm.uma.pf_table_entries.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_table_entries.domain.[num].imax |
maximum item count in this period |
vm.uma.pf_table_entries.domain.[num].imin |
minimum item count in this period |
vm.uma.pf_table_entries.domain.[num].limin |
Long time minimum item count |
vm.uma.pf_table_entries.domain.[num].nitems |
number of items in this domain |
vm.uma.pf_table_entries.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_table_entries.domain.[num].wss |
Working set size |
vm.uma.pf_table_entries.flags |
Allocator configuration flags |
vm.uma.pf_table_entries.keg |
|
vm.uma.pf_table_entries.keg.align |
item alignment mask |
vm.uma.pf_table_entries.keg.domain |
|
vm.uma.pf_table_entries.keg.domain.[num] |
|
vm.uma.pf_table_entries.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_table_entries.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_table_entries.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_table_entries.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_table_entries.keg.ipers |
items available per-slab |
vm.uma.pf_table_entries.keg.name |
Keg name |
vm.uma.pf_table_entries.keg.ppera |
pages per-slab allocation |
vm.uma.pf_table_entries.keg.reserve |
number of reserved items |
vm.uma.pf_table_entries.keg.rsize |
Real object size with alignment |
vm.uma.pf_table_entries.limit |
|
vm.uma.pf_table_entries.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_table_entries.limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_table_entries.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_table_entries.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_table_entries.limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_table_entries.size |
Allocation size |
vm.uma.pf_table_entries.stats |
|
vm.uma.pf_table_entries.stats.allocs |
Total allocation calls |
vm.uma.pf_table_entries.stats.current |
Current number of allocated items |
vm.uma.pf_table_entries.stats.fails |
Number of allocation failures |
vm.uma.pf_table_entries.stats.frees |
Total free calls |
vm.uma.pf_table_entries.stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_table_entries_[num] |
|
vm.uma.pf_table_entries_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.pf_table_entries_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_table_entries_[num].domain |
|
vm.uma.pf_table_entries_[num].domain.[num] |
|
vm.uma.pf_table_entries_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_table_entries_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.pf_table_entries_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.pf_table_entries_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.pf_table_entries_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.pf_table_entries_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_table_entries_[num].domain.[num].wss |
Working set size |
vm.uma.pf_table_entries_[num].flags |
Allocator configuration flags |
vm.uma.pf_table_entries_[num].keg |
|
vm.uma.pf_table_entries_[num].keg.align |
item alignment mask |
vm.uma.pf_table_entries_[num].keg.domain |
|
vm.uma.pf_table_entries_[num].keg.domain.[num] |
|
vm.uma.pf_table_entries_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_table_entries_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_table_entries_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_table_entries_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_table_entries_[num].keg.ipers |
items available per-slab |
vm.uma.pf_table_entries_[num].keg.name |
Keg name |
vm.uma.pf_table_entries_[num].keg.ppera |
pages per-slab allocation |
vm.uma.pf_table_entries_[num].keg.reserve |
number of reserved items |
vm.uma.pf_table_entries_[num].keg.rsize |
Real object size with alignment |
vm.uma.pf_table_entries_[num].limit |
|
vm.uma.pf_table_entries_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_table_entries_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_table_entries_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_table_entries_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_table_entries_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_table_entries_[num].size |
Allocation size |
vm.uma.pf_table_entries_[num].stats |
|
vm.uma.pf_table_entries_[num].stats.allocs |
Total allocation calls |
vm.uma.pf_table_entries_[num].stats.current |
Current number of allocated items |
vm.uma.pf_table_entries_[num].stats.fails |
Number of allocation failures |
vm.uma.pf_table_entries_[num].stats.frees |
Total free calls |
vm.uma.pf_table_entries_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_table_entry_counters |
|
vm.uma.pf_table_entry_counters.bucket_size |
Desired per-cpu cache size |
vm.uma.pf_table_entry_counters.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_table_entry_counters.domain |
|
vm.uma.pf_table_entry_counters.domain.[num] |
|
vm.uma.pf_table_entry_counters.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_table_entry_counters.domain.[num].imax |
maximum item count in this period |
vm.uma.pf_table_entry_counters.domain.[num].imin |
minimum item count in this period |
vm.uma.pf_table_entry_counters.domain.[num].limin |
Long time minimum item count |
vm.uma.pf_table_entry_counters.domain.[num].nitems |
number of items in this domain |
vm.uma.pf_table_entry_counters.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_table_entry_counters.domain.[num].wss |
Working set size |
vm.uma.pf_table_entry_counters.flags |
Allocator configuration flags |
vm.uma.pf_table_entry_counters.keg |
|
vm.uma.pf_table_entry_counters.keg.align |
item alignment mask |
vm.uma.pf_table_entry_counters.keg.domain |
|
vm.uma.pf_table_entry_counters.keg.domain.[num] |
|
vm.uma.pf_table_entry_counters.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_table_entry_counters.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_table_entry_counters.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_table_entry_counters.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_table_entry_counters.keg.ipers |
items available per-slab |
vm.uma.pf_table_entry_counters.keg.name |
Keg name |
vm.uma.pf_table_entry_counters.keg.ppera |
pages per-slab allocation |
vm.uma.pf_table_entry_counters.keg.reserve |
number of reserved items |
vm.uma.pf_table_entry_counters.keg.rsize |
Real object size with alignment |
vm.uma.pf_table_entry_counters.limit |
|
vm.uma.pf_table_entry_counters.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_table_entry_counters.limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_table_entry_counters.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_table_entry_counters.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_table_entry_counters.limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_table_entry_counters.size |
Allocation size |
vm.uma.pf_table_entry_counters.stats |
|
vm.uma.pf_table_entry_counters.stats.allocs |
Total allocation calls |
vm.uma.pf_table_entry_counters.stats.current |
Current number of allocated items |
vm.uma.pf_table_entry_counters.stats.fails |
Number of allocation failures |
vm.uma.pf_table_entry_counters.stats.frees |
Total free calls |
vm.uma.pf_table_entry_counters.stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_table_entry_counters_[num] |
|
vm.uma.pf_table_entry_counters_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.pf_table_entry_counters_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_table_entry_counters_[num].domain |
|
vm.uma.pf_table_entry_counters_[num].domain.[num] |
|
vm.uma.pf_table_entry_counters_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_table_entry_counters_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.pf_table_entry_counters_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.pf_table_entry_counters_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.pf_table_entry_counters_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.pf_table_entry_counters_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_table_entry_counters_[num].domain.[num].wss |
Working set size |
vm.uma.pf_table_entry_counters_[num].flags |
Allocator configuration flags |
vm.uma.pf_table_entry_counters_[num].keg |
|
vm.uma.pf_table_entry_counters_[num].keg.align |
item alignment mask |
vm.uma.pf_table_entry_counters_[num].keg.domain |
|
vm.uma.pf_table_entry_counters_[num].keg.domain.[num] |
|
vm.uma.pf_table_entry_counters_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_table_entry_counters_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_table_entry_counters_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_table_entry_counters_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_table_entry_counters_[num].keg.ipers |
items available per-slab |
vm.uma.pf_table_entry_counters_[num].keg.name |
Keg name |
vm.uma.pf_table_entry_counters_[num].keg.ppera |
pages per-slab allocation |
vm.uma.pf_table_entry_counters_[num].keg.reserve |
number of reserved items |
vm.uma.pf_table_entry_counters_[num].keg.rsize |
Real object size with alignment |
vm.uma.pf_table_entry_counters_[num].limit |
|
vm.uma.pf_table_entry_counters_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_table_entry_counters_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_table_entry_counters_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_table_entry_counters_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_table_entry_counters_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_table_entry_counters_[num].size |
Allocation size |
vm.uma.pf_table_entry_counters_[num].stats |
|
vm.uma.pf_table_entry_counters_[num].stats.allocs |
Total allocation calls |
vm.uma.pf_table_entry_counters_[num].stats.current |
Current number of allocated items |
vm.uma.pf_table_entry_counters_[num].stats.fails |
Number of allocation failures |
vm.uma.pf_table_entry_counters_[num].stats.frees |
Total free calls |
vm.uma.pf_table_entry_counters_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_tags |
|
vm.uma.pf_tags.bucket_size |
Desired per-cpu cache size |
vm.uma.pf_tags.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_tags.domain |
|
vm.uma.pf_tags.domain.[num] |
|
vm.uma.pf_tags.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_tags.domain.[num].imax |
maximum item count in this period |
vm.uma.pf_tags.domain.[num].imin |
minimum item count in this period |
vm.uma.pf_tags.domain.[num].limin |
Long time minimum item count |
vm.uma.pf_tags.domain.[num].nitems |
number of items in this domain |
vm.uma.pf_tags.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_tags.domain.[num].wss |
Working set size |
vm.uma.pf_tags.flags |
Allocator configuration flags |
vm.uma.pf_tags.keg |
|
vm.uma.pf_tags.keg.align |
item alignment mask |
vm.uma.pf_tags.keg.domain |
|
vm.uma.pf_tags.keg.domain.[num] |
|
vm.uma.pf_tags.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_tags.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_tags.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_tags.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_tags.keg.ipers |
items available per-slab |
vm.uma.pf_tags.keg.name |
Keg name |
vm.uma.pf_tags.keg.ppera |
pages per-slab allocation |
vm.uma.pf_tags.keg.reserve |
number of reserved items |
vm.uma.pf_tags.keg.rsize |
Real object size with alignment |
vm.uma.pf_tags.limit |
|
vm.uma.pf_tags.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_tags.limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_tags.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_tags.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_tags.limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_tags.size |
Allocation size |
vm.uma.pf_tags.stats |
|
vm.uma.pf_tags.stats.allocs |
Total allocation calls |
vm.uma.pf_tags.stats.current |
Current number of allocated items |
vm.uma.pf_tags.stats.fails |
Number of allocation failures |
vm.uma.pf_tags.stats.frees |
Total free calls |
vm.uma.pf_tags.stats.xdomain |
Free calls from the wrong domain |
vm.uma.pf_tags_[num] |
|
vm.uma.pf_tags_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.pf_tags_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pf_tags_[num].domain |
|
vm.uma.pf_tags_[num].domain.[num] |
|
vm.uma.pf_tags_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pf_tags_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.pf_tags_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.pf_tags_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.pf_tags_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.pf_tags_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pf_tags_[num].domain.[num].wss |
Working set size |
vm.uma.pf_tags_[num].flags |
Allocator configuration flags |
vm.uma.pf_tags_[num].keg |
|
vm.uma.pf_tags_[num].keg.align |
item alignment mask |
vm.uma.pf_tags_[num].keg.domain |
|
vm.uma.pf_tags_[num].keg.domain.[num] |
|
vm.uma.pf_tags_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pf_tags_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pf_tags_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pf_tags_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pf_tags_[num].keg.ipers |
items available per-slab |
vm.uma.pf_tags_[num].keg.name |
Keg name |
vm.uma.pf_tags_[num].keg.ppera |
pages per-slab allocation |
vm.uma.pf_tags_[num].keg.reserve |
number of reserved items |
vm.uma.pf_tags_[num].keg.rsize |
Real object size with alignment |
vm.uma.pf_tags_[num].limit |
|
vm.uma.pf_tags_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pf_tags_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.pf_tags_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pf_tags_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pf_tags_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.pf_tags_[num].size |
Allocation size |
vm.uma.pf_tags_[num].stats |
|
vm.uma.pf_tags_[num].stats.allocs |
Total allocation calls |
vm.uma.pf_tags_[num].stats.current |
Current number of allocated items |
vm.uma.pf_tags_[num].stats.fails |
Number of allocation failures |
vm.uma.pf_tags_[num].stats.frees |
Total free calls |
vm.uma.pf_tags_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.pipe |
|
vm.uma.pipe.bucket_size |
Desired per-cpu cache size |
vm.uma.pipe.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pipe.domain |
|
vm.uma.pipe.domain.[num] |
|
vm.uma.pipe.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pipe.domain.[num].imax |
maximum item count in this period |
vm.uma.pipe.domain.[num].imin |
minimum item count in this period |
vm.uma.pipe.domain.[num].limin |
Long time minimum item count |
vm.uma.pipe.domain.[num].nitems |
number of items in this domain |
vm.uma.pipe.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pipe.domain.[num].wss |
Working set size |
vm.uma.pipe.flags |
Allocator configuration flags |
vm.uma.pipe.keg |
|
vm.uma.pipe.keg.align |
item alignment mask |
vm.uma.pipe.keg.domain |
|
vm.uma.pipe.keg.domain.[num] |
|
vm.uma.pipe.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pipe.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pipe.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pipe.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pipe.keg.ipers |
items available per-slab |
vm.uma.pipe.keg.name |
Keg name |
vm.uma.pipe.keg.ppera |
pages per-slab allocation |
vm.uma.pipe.keg.reserve |
number of reserved items |
vm.uma.pipe.keg.rsize |
Real object size with alignment |
vm.uma.pipe.limit |
|
vm.uma.pipe.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pipe.limit.items |
Current number of allocated items if limit is set |
vm.uma.pipe.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pipe.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pipe.limit.sleeps |
Total zone limit sleeps |
vm.uma.pipe.size |
Allocation size |
vm.uma.pipe.stats |
|
vm.uma.pipe.stats.allocs |
Total allocation calls |
vm.uma.pipe.stats.current |
Current number of allocated items |
vm.uma.pipe.stats.fails |
Number of allocation failures |
vm.uma.pipe.stats.frees |
Total free calls |
vm.uma.pipe.stats.xdomain |
Free calls from the wrong domain |
vm.uma.pkru_ranges |
|
vm.uma.pkru_ranges.bucket_size |
Desired per-cpu cache size |
vm.uma.pkru_ranges.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.pkru_ranges.domain |
|
vm.uma.pkru_ranges.domain.[num] |
|
vm.uma.pkru_ranges.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.pkru_ranges.domain.[num].imax |
maximum item count in this period |
vm.uma.pkru_ranges.domain.[num].imin |
minimum item count in this period |
vm.uma.pkru_ranges.domain.[num].limin |
Long time minimum item count |
vm.uma.pkru_ranges.domain.[num].nitems |
number of items in this domain |
vm.uma.pkru_ranges.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.pkru_ranges.domain.[num].wss |
Working set size |
vm.uma.pkru_ranges.flags |
Allocator configuration flags |
vm.uma.pkru_ranges.keg |
|
vm.uma.pkru_ranges.keg.align |
item alignment mask |
vm.uma.pkru_ranges.keg.domain |
|
vm.uma.pkru_ranges.keg.domain.[num] |
|
vm.uma.pkru_ranges.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.pkru_ranges.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.pkru_ranges.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.pkru_ranges.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.pkru_ranges.keg.ipers |
items available per-slab |
vm.uma.pkru_ranges.keg.name |
Keg name |
vm.uma.pkru_ranges.keg.ppera |
pages per-slab allocation |
vm.uma.pkru_ranges.keg.reserve |
number of reserved items |
vm.uma.pkru_ranges.keg.rsize |
Real object size with alignment |
vm.uma.pkru_ranges.limit |
|
vm.uma.pkru_ranges.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.pkru_ranges.limit.items |
Current number of allocated items if limit is set |
vm.uma.pkru_ranges.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.pkru_ranges.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.pkru_ranges.limit.sleeps |
Total zone limit sleeps |
vm.uma.pkru_ranges.size |
Allocation size |
vm.uma.pkru_ranges.stats |
|
vm.uma.pkru_ranges.stats.allocs |
Total allocation calls |
vm.uma.pkru_ranges.stats.current |
Current number of allocated items |
vm.uma.pkru_ranges.stats.fails |
Number of allocation failures |
vm.uma.pkru_ranges.stats.frees |
Total free calls |
vm.uma.pkru_ranges.stats.xdomain |
Free calls from the wrong domain |
vm.uma.rangeset_pctrie_nodes |
|
vm.uma.rangeset_pctrie_nodes.bucket_size |
Desired per-cpu cache size |
vm.uma.rangeset_pctrie_nodes.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.rangeset_pctrie_nodes.domain |
|
vm.uma.rangeset_pctrie_nodes.domain.[num] |
|
vm.uma.rangeset_pctrie_nodes.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.rangeset_pctrie_nodes.domain.[num].imax |
maximum item count in this period |
vm.uma.rangeset_pctrie_nodes.domain.[num].imin |
minimum item count in this period |
vm.uma.rangeset_pctrie_nodes.domain.[num].limin |
Long time minimum item count |
vm.uma.rangeset_pctrie_nodes.domain.[num].nitems |
number of items in this domain |
vm.uma.rangeset_pctrie_nodes.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.rangeset_pctrie_nodes.domain.[num].wss |
Working set size |
vm.uma.rangeset_pctrie_nodes.flags |
Allocator configuration flags |
vm.uma.rangeset_pctrie_nodes.keg |
|
vm.uma.rangeset_pctrie_nodes.keg.align |
item alignment mask |
vm.uma.rangeset_pctrie_nodes.keg.domain |
|
vm.uma.rangeset_pctrie_nodes.keg.domain.[num] |
|
vm.uma.rangeset_pctrie_nodes.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.rangeset_pctrie_nodes.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.rangeset_pctrie_nodes.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.rangeset_pctrie_nodes.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.rangeset_pctrie_nodes.keg.ipers |
items available per-slab |
vm.uma.rangeset_pctrie_nodes.keg.name |
Keg name |
vm.uma.rangeset_pctrie_nodes.keg.ppera |
pages per-slab allocation |
vm.uma.rangeset_pctrie_nodes.keg.reserve |
number of reserved items |
vm.uma.rangeset_pctrie_nodes.keg.rsize |
Real object size with alignment |
vm.uma.rangeset_pctrie_nodes.limit |
|
vm.uma.rangeset_pctrie_nodes.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.rangeset_pctrie_nodes.limit.items |
Current number of allocated items if limit is set |
vm.uma.rangeset_pctrie_nodes.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.rangeset_pctrie_nodes.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.rangeset_pctrie_nodes.limit.sleeps |
Total zone limit sleeps |
vm.uma.rangeset_pctrie_nodes.size |
Allocation size |
vm.uma.rangeset_pctrie_nodes.stats |
|
vm.uma.rangeset_pctrie_nodes.stats.allocs |
Total allocation calls |
vm.uma.rangeset_pctrie_nodes.stats.current |
Current number of allocated items |
vm.uma.rangeset_pctrie_nodes.stats.fails |
Number of allocation failures |
vm.uma.rangeset_pctrie_nodes.stats.frees |
Total free calls |
vm.uma.rangeset_pctrie_nodes.stats.xdomain |
Free calls from the wrong domain |
vm.uma.ripcb |
|
vm.uma.ripcb.bucket_size |
Desired per-cpu cache size |
vm.uma.ripcb.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.ripcb.domain |
|
vm.uma.ripcb.domain.[num] |
|
vm.uma.ripcb.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.ripcb.domain.[num].imax |
maximum item count in this period |
vm.uma.ripcb.domain.[num].imin |
minimum item count in this period |
vm.uma.ripcb.domain.[num].limin |
Long time minimum item count |
vm.uma.ripcb.domain.[num].nitems |
number of items in this domain |
vm.uma.ripcb.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.ripcb.domain.[num].wss |
Working set size |
vm.uma.ripcb.flags |
Allocator configuration flags |
vm.uma.ripcb.keg |
|
vm.uma.ripcb.keg.align |
item alignment mask |
vm.uma.ripcb.keg.domain |
|
vm.uma.ripcb.keg.domain.[num] |
|
vm.uma.ripcb.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.ripcb.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.ripcb.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.ripcb.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.ripcb.keg.ipers |
items available per-slab |
vm.uma.ripcb.keg.name |
Keg name |
vm.uma.ripcb.keg.ppera |
pages per-slab allocation |
vm.uma.ripcb.keg.reserve |
number of reserved items |
vm.uma.ripcb.keg.rsize |
Real object size with alignment |
vm.uma.ripcb.limit |
|
vm.uma.ripcb.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.ripcb.limit.items |
Current number of allocated items if limit is set |
vm.uma.ripcb.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.ripcb.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.ripcb.limit.sleeps |
Total zone limit sleeps |
vm.uma.ripcb.size |
Allocation size |
vm.uma.ripcb.stats |
|
vm.uma.ripcb.stats.allocs |
Total allocation calls |
vm.uma.ripcb.stats.current |
Current number of allocated items |
vm.uma.ripcb.stats.fails |
Number of allocation failures |
vm.uma.ripcb.stats.frees |
Total free calls |
vm.uma.ripcb.stats.xdomain |
Free calls from the wrong domain |
vm.uma.ripcb_ports |
|
vm.uma.ripcb_ports.bucket_size |
Desired per-cpu cache size |
vm.uma.ripcb_ports.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.ripcb_ports.domain |
|
vm.uma.ripcb_ports.domain.[num] |
|
vm.uma.ripcb_ports.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.ripcb_ports.domain.[num].imax |
maximum item count in this period |
vm.uma.ripcb_ports.domain.[num].imin |
minimum item count in this period |
vm.uma.ripcb_ports.domain.[num].limin |
Long time minimum item count |
vm.uma.ripcb_ports.domain.[num].nitems |
number of items in this domain |
vm.uma.ripcb_ports.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.ripcb_ports.domain.[num].wss |
Working set size |
vm.uma.ripcb_ports.flags |
Allocator configuration flags |
vm.uma.ripcb_ports.keg |
|
vm.uma.ripcb_ports.keg.align |
item alignment mask |
vm.uma.ripcb_ports.keg.domain |
|
vm.uma.ripcb_ports.keg.domain.[num] |
|
vm.uma.ripcb_ports.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.ripcb_ports.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.ripcb_ports.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.ripcb_ports.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.ripcb_ports.keg.ipers |
items available per-slab |
vm.uma.ripcb_ports.keg.name |
Keg name |
vm.uma.ripcb_ports.keg.ppera |
pages per-slab allocation |
vm.uma.ripcb_ports.keg.reserve |
number of reserved items |
vm.uma.ripcb_ports.keg.rsize |
Real object size with alignment |
vm.uma.ripcb_ports.limit |
|
vm.uma.ripcb_ports.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.ripcb_ports.limit.items |
Current number of allocated items if limit is set |
vm.uma.ripcb_ports.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.ripcb_ports.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.ripcb_ports.limit.sleeps |
Total zone limit sleeps |
vm.uma.ripcb_ports.size |
Allocation size |
vm.uma.ripcb_ports.stats |
|
vm.uma.ripcb_ports.stats.allocs |
Total allocation calls |
vm.uma.ripcb_ports.stats.current |
Current number of allocated items |
vm.uma.ripcb_ports.stats.fails |
Number of allocation failures |
vm.uma.ripcb_ports.stats.frees |
Total free calls |
vm.uma.ripcb_ports.stats.xdomain |
Free calls from the wrong domain |
vm.uma.rl_entry |
|
vm.uma.rl_entry.bucket_size |
Desired per-cpu cache size |
vm.uma.rl_entry.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.rl_entry.domain |
|
vm.uma.rl_entry.domain.[num] |
|
vm.uma.rl_entry.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.rl_entry.domain.[num].imax |
maximum item count in this period |
vm.uma.rl_entry.domain.[num].imin |
minimum item count in this period |
vm.uma.rl_entry.domain.[num].limin |
Long time minimum item count |
vm.uma.rl_entry.domain.[num].nitems |
number of items in this domain |
vm.uma.rl_entry.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.rl_entry.domain.[num].wss |
Working set size |
vm.uma.rl_entry.flags |
Allocator configuration flags |
vm.uma.rl_entry.keg |
|
vm.uma.rl_entry.keg.align |
item alignment mask |
vm.uma.rl_entry.keg.domain |
|
vm.uma.rl_entry.keg.domain.[num] |
|
vm.uma.rl_entry.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.rl_entry.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.rl_entry.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.rl_entry.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.rl_entry.keg.ipers |
items available per-slab |
vm.uma.rl_entry.keg.name |
Keg name |
vm.uma.rl_entry.keg.ppera |
pages per-slab allocation |
vm.uma.rl_entry.keg.reserve |
number of reserved items |
vm.uma.rl_entry.keg.rsize |
Real object size with alignment |
vm.uma.rl_entry.limit |
|
vm.uma.rl_entry.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.rl_entry.limit.items |
Current number of allocated items if limit is set |
vm.uma.rl_entry.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.rl_entry.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.rl_entry.limit.sleeps |
Total zone limit sleeps |
vm.uma.rl_entry.size |
Allocation size |
vm.uma.rl_entry.stats |
|
vm.uma.rl_entry.stats.allocs |
Total allocation calls |
vm.uma.rl_entry.stats.current |
Current number of allocated items |
vm.uma.rl_entry.stats.fails |
Number of allocation failures |
vm.uma.rl_entry.stats.frees |
Total free calls |
vm.uma.rl_entry.stats.xdomain |
Free calls from the wrong domain |
vm.uma.routing_nhops |
|
vm.uma.routing_nhops.bucket_size |
Desired per-cpu cache size |
vm.uma.routing_nhops.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.routing_nhops.domain |
|
vm.uma.routing_nhops.domain.[num] |
|
vm.uma.routing_nhops.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.routing_nhops.domain.[num].imax |
maximum item count in this period |
vm.uma.routing_nhops.domain.[num].imin |
minimum item count in this period |
vm.uma.routing_nhops.domain.[num].limin |
Long time minimum item count |
vm.uma.routing_nhops.domain.[num].nitems |
number of items in this domain |
vm.uma.routing_nhops.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.routing_nhops.domain.[num].wss |
Working set size |
vm.uma.routing_nhops.flags |
Allocator configuration flags |
vm.uma.routing_nhops.keg |
|
vm.uma.routing_nhops.keg.align |
item alignment mask |
vm.uma.routing_nhops.keg.domain |
|
vm.uma.routing_nhops.keg.domain.[num] |
|
vm.uma.routing_nhops.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.routing_nhops.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.routing_nhops.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.routing_nhops.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.routing_nhops.keg.ipers |
items available per-slab |
vm.uma.routing_nhops.keg.name |
Keg name |
vm.uma.routing_nhops.keg.ppera |
pages per-slab allocation |
vm.uma.routing_nhops.keg.reserve |
number of reserved items |
vm.uma.routing_nhops.keg.rsize |
Real object size with alignment |
vm.uma.routing_nhops.limit |
|
vm.uma.routing_nhops.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.routing_nhops.limit.items |
Current number of allocated items if limit is set |
vm.uma.routing_nhops.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.routing_nhops.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.routing_nhops.limit.sleeps |
Total zone limit sleeps |
vm.uma.routing_nhops.size |
Allocation size |
vm.uma.routing_nhops.stats |
|
vm.uma.routing_nhops.stats.allocs |
Total allocation calls |
vm.uma.routing_nhops.stats.current |
Current number of allocated items |
vm.uma.routing_nhops.stats.fails |
Number of allocation failures |
vm.uma.routing_nhops.stats.frees |
Total free calls |
vm.uma.routing_nhops.stats.xdomain |
Free calls from the wrong domain |
vm.uma.rtentry |
|
vm.uma.rtentry.bucket_size |
Desired per-cpu cache size |
vm.uma.rtentry.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.rtentry.domain |
|
vm.uma.rtentry.domain.[num] |
|
vm.uma.rtentry.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.rtentry.domain.[num].imax |
maximum item count in this period |
vm.uma.rtentry.domain.[num].imin |
minimum item count in this period |
vm.uma.rtentry.domain.[num].limin |
Long time minimum item count |
vm.uma.rtentry.domain.[num].nitems |
number of items in this domain |
vm.uma.rtentry.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.rtentry.domain.[num].wss |
Working set size |
vm.uma.rtentry.flags |
Allocator configuration flags |
vm.uma.rtentry.keg |
|
vm.uma.rtentry.keg.align |
item alignment mask |
vm.uma.rtentry.keg.domain |
|
vm.uma.rtentry.keg.domain.[num] |
|
vm.uma.rtentry.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.rtentry.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.rtentry.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.rtentry.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.rtentry.keg.ipers |
items available per-slab |
vm.uma.rtentry.keg.name |
Keg name |
vm.uma.rtentry.keg.ppera |
pages per-slab allocation |
vm.uma.rtentry.keg.reserve |
number of reserved items |
vm.uma.rtentry.keg.rsize |
Real object size with alignment |
vm.uma.rtentry.limit |
|
vm.uma.rtentry.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.rtentry.limit.items |
Current number of allocated items if limit is set |
vm.uma.rtentry.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.rtentry.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.rtentry.limit.sleeps |
Total zone limit sleeps |
vm.uma.rtentry.size |
Allocation size |
vm.uma.rtentry.stats |
|
vm.uma.rtentry.stats.allocs |
Total allocation calls |
vm.uma.rtentry.stats.current |
Current number of allocated items |
vm.uma.rtentry.stats.fails |
Number of allocation failures |
vm.uma.rtentry.stats.frees |
Total free calls |
vm.uma.rtentry.stats.xdomain |
Free calls from the wrong domain |
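Note (editorial, not part of the sysctl list itself): entries such as vm.uma.rtentry_[num] that follow, where [num] stands for a small integer, appear to be created when more than one UMA zone is registered under the same name (for example one instance per VNET); the kernel then disambiguates the duplicate sysctl nodes with a numeric suffix. The sub-nodes under a suffixed zone have the same meaning as those under the unsuffixed zone of the same name.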
vm.uma.rtentry_[num] |
|
vm.uma.rtentry_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.rtentry_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.rtentry_[num].domain |
|
vm.uma.rtentry_[num].domain.[num] |
|
vm.uma.rtentry_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.rtentry_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.rtentry_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.rtentry_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.rtentry_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.rtentry_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.rtentry_[num].domain.[num].wss |
Working set size |
vm.uma.rtentry_[num].flags |
Allocator configuration flags |
vm.uma.rtentry_[num].keg |
|
vm.uma.rtentry_[num].keg.align |
item alignment mask |
vm.uma.rtentry_[num].keg.domain |
|
vm.uma.rtentry_[num].keg.domain.[num] |
|
vm.uma.rtentry_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.rtentry_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.rtentry_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.rtentry_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.rtentry_[num].keg.ipers |
items available per-slab |
vm.uma.rtentry_[num].keg.name |
Keg name |
vm.uma.rtentry_[num].keg.ppera |
pages per-slab allocation |
vm.uma.rtentry_[num].keg.reserve |
number of reserved items |
vm.uma.rtentry_[num].keg.rsize |
Real object size with alignment |
vm.uma.rtentry_[num].limit |
|
vm.uma.rtentry_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.rtentry_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.rtentry_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.rtentry_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.rtentry_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.rtentry_[num].size |
Allocation size |
vm.uma.rtentry_[num].stats |
|
vm.uma.rtentry_[num].stats.allocs |
Total allocation calls |
vm.uma.rtentry_[num].stats.current |
Current number of allocated items |
vm.uma.rtentry_[num].stats.fails |
Number of allocation failures |
vm.uma.rtentry_[num].stats.frees |
Total free calls |
vm.uma.rtentry_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.sa_cache |
|
vm.uma.sa_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.sa_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.sa_cache.domain |
|
vm.uma.sa_cache.domain.[num] |
|
vm.uma.sa_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.sa_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.sa_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.sa_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.sa_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.sa_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.sa_cache.domain.[num].wss |
Working set size |
vm.uma.sa_cache.flags |
Allocator configuration flags |
vm.uma.sa_cache.keg |
|
vm.uma.sa_cache.keg.align |
item alignment mask |
vm.uma.sa_cache.keg.domain |
|
vm.uma.sa_cache.keg.domain.[num] |
|
vm.uma.sa_cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.sa_cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.sa_cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.sa_cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.sa_cache.keg.ipers |
items available per-slab |
vm.uma.sa_cache.keg.name |
Keg name |
vm.uma.sa_cache.keg.ppera |
pages per-slab allocation |
vm.uma.sa_cache.keg.reserve |
number of reserved items |
vm.uma.sa_cache.keg.rsize |
Real object size with alignment |
vm.uma.sa_cache.limit |
|
vm.uma.sa_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.sa_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.sa_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.sa_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.sa_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.sa_cache.size |
Allocation size |
vm.uma.sa_cache.stats |
|
vm.uma.sa_cache.stats.allocs |
Total allocation calls |
vm.uma.sa_cache.stats.current |
Current number of allocated items |
vm.uma.sa_cache.stats.fails |
Number of allocation failures |
vm.uma.sa_cache.stats.frees |
Total free calls |
vm.uma.sa_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.sackhole |
|
vm.uma.sackhole.bucket_size |
Desired per-cpu cache size |
vm.uma.sackhole.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.sackhole.domain |
|
vm.uma.sackhole.domain.[num] |
|
vm.uma.sackhole.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.sackhole.domain.[num].imax |
maximum item count in this period |
vm.uma.sackhole.domain.[num].imin |
minimum item count in this period |
vm.uma.sackhole.domain.[num].limin |
Long time minimum item count |
vm.uma.sackhole.domain.[num].nitems |
number of items in this domain |
vm.uma.sackhole.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.sackhole.domain.[num].wss |
Working set size |
vm.uma.sackhole.flags |
Allocator configuration flags |
vm.uma.sackhole.keg |
|
vm.uma.sackhole.keg.align |
item alignment mask |
vm.uma.sackhole.keg.domain |
|
vm.uma.sackhole.keg.domain.[num] |
|
vm.uma.sackhole.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.sackhole.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.sackhole.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.sackhole.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.sackhole.keg.ipers |
items available per-slab |
vm.uma.sackhole.keg.name |
Keg name |
vm.uma.sackhole.keg.ppera |
pages per-slab allocation |
vm.uma.sackhole.keg.reserve |
number of reserved items |
vm.uma.sackhole.keg.rsize |
Real object size with alignment |
vm.uma.sackhole.limit |
|
vm.uma.sackhole.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.sackhole.limit.items |
Current number of allocated items if limit is set |
vm.uma.sackhole.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.sackhole.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.sackhole.limit.sleeps |
Total zone limit sleeps |
vm.uma.sackhole.size |
Allocation size |
vm.uma.sackhole.stats |
|
vm.uma.sackhole.stats.allocs |
Total allocation calls |
vm.uma.sackhole.stats.current |
Current number of allocated items |
vm.uma.sackhole.stats.fails |
Number of allocation failures |
vm.uma.sackhole.stats.frees |
Total free calls |
vm.uma.sackhole.stats.xdomain |
Free calls from the wrong domain |
vm.uma.sackhole_[num] |
|
vm.uma.sackhole_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.sackhole_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.sackhole_[num].domain |
|
vm.uma.sackhole_[num].domain.[num] |
|
vm.uma.sackhole_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.sackhole_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.sackhole_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.sackhole_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.sackhole_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.sackhole_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.sackhole_[num].domain.[num].wss |
Working set size |
vm.uma.sackhole_[num].flags |
Allocator configuration flags |
vm.uma.sackhole_[num].keg |
|
vm.uma.sackhole_[num].keg.align |
item alignment mask |
vm.uma.sackhole_[num].keg.domain |
|
vm.uma.sackhole_[num].keg.domain.[num] |
|
vm.uma.sackhole_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.sackhole_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.sackhole_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.sackhole_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.sackhole_[num].keg.ipers |
items available per-slab |
vm.uma.sackhole_[num].keg.name |
Keg name |
vm.uma.sackhole_[num].keg.ppera |
pages per-slab allocation |
vm.uma.sackhole_[num].keg.reserve |
number of reserved items |
vm.uma.sackhole_[num].keg.rsize |
Real object size with alignment |
vm.uma.sackhole_[num].limit |
|
vm.uma.sackhole_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.sackhole_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.sackhole_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.sackhole_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.sackhole_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.sackhole_[num].size |
Allocation size |
vm.uma.sackhole_[num].stats |
|
vm.uma.sackhole_[num].stats.allocs |
Total allocation calls |
vm.uma.sackhole_[num].stats.current |
Current number of allocated items |
vm.uma.sackhole_[num].stats.fails |
Number of allocation failures |
vm.uma.sackhole_[num].stats.frees |
Total free calls |
vm.uma.sackhole_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.sio_cache_[num] |
|
vm.uma.sio_cache_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.sio_cache_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.sio_cache_[num].domain |
|
vm.uma.sio_cache_[num].domain.[num] |
|
vm.uma.sio_cache_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.sio_cache_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.sio_cache_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.sio_cache_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.sio_cache_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.sio_cache_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.sio_cache_[num].domain.[num].wss |
Working set size |
vm.uma.sio_cache_[num].flags |
Allocator configuration flags |
vm.uma.sio_cache_[num].keg |
|
vm.uma.sio_cache_[num].keg.align |
item alignment mask |
vm.uma.sio_cache_[num].keg.domain |
|
vm.uma.sio_cache_[num].keg.domain.[num] |
|
vm.uma.sio_cache_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.sio_cache_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.sio_cache_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.sio_cache_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.sio_cache_[num].keg.ipers |
items available per-slab |
vm.uma.sio_cache_[num].keg.name |
Keg name |
vm.uma.sio_cache_[num].keg.ppera |
pages per-slab allocation |
vm.uma.sio_cache_[num].keg.reserve |
number of reserved items |
vm.uma.sio_cache_[num].keg.rsize |
Real object size with alignment |
vm.uma.sio_cache_[num].limit |
|
vm.uma.sio_cache_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.sio_cache_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.sio_cache_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.sio_cache_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.sio_cache_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.sio_cache_[num].size |
Allocation size |
vm.uma.sio_cache_[num].stats |
|
vm.uma.sio_cache_[num].stats.allocs |
Total allocation calls |
vm.uma.sio_cache_[num].stats.current |
Current number of allocated items |
vm.uma.sio_cache_[num].stats.fails |
Number of allocation failures |
vm.uma.sio_cache_[num].stats.frees |
Total free calls |
vm.uma.sio_cache_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.skbuff |
|
vm.uma.skbuff.bucket_size |
Desired per-cpu cache size |
vm.uma.skbuff.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.skbuff.domain |
|
vm.uma.skbuff.domain.[num] |
|
vm.uma.skbuff.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.skbuff.domain.[num].imax |
maximum item count in this period |
vm.uma.skbuff.domain.[num].imin |
minimum item count in this period |
vm.uma.skbuff.domain.[num].limin |
Long time minimum item count |
vm.uma.skbuff.domain.[num].nitems |
number of items in this domain |
vm.uma.skbuff.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.skbuff.domain.[num].wss |
Working set size |
vm.uma.skbuff.flags |
Allocator configuration flags |
vm.uma.skbuff.keg |
|
vm.uma.skbuff.keg.align |
item alignment mask |
vm.uma.skbuff.keg.domain |
|
vm.uma.skbuff.keg.domain.[num] |
|
vm.uma.skbuff.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.skbuff.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.skbuff.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.skbuff.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.skbuff.keg.ipers |
items available per-slab |
vm.uma.skbuff.keg.name |
Keg name |
vm.uma.skbuff.keg.ppera |
pages per-slab allocation |
vm.uma.skbuff.keg.reserve |
number of reserved items |
vm.uma.skbuff.keg.rsize |
Real object size with alignment |
vm.uma.skbuff.limit |
|
vm.uma.skbuff.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.skbuff.limit.items |
Current number of allocated items if limit is set |
vm.uma.skbuff.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.skbuff.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.skbuff.limit.sleeps |
Total zone limit sleeps |
vm.uma.skbuff.size |
Allocation size |
vm.uma.skbuff.stats |
|
vm.uma.skbuff.stats.allocs |
Total allocation calls |
vm.uma.skbuff.stats.current |
Current number of allocated items |
vm.uma.skbuff.stats.fails |
Number of allocation failures |
vm.uma.skbuff.stats.frees |
Total free calls |
vm.uma.skbuff.stats.xdomain |
Free calls from the wrong domain |
vm.uma.socket |
|
vm.uma.socket.bucket_size |
Desired per-cpu cache size |
vm.uma.socket.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.socket.domain |
|
vm.uma.socket.domain.[num] |
|
vm.uma.socket.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.socket.domain.[num].imax |
maximum item count in this period |
vm.uma.socket.domain.[num].imin |
minimum item count in this period |
vm.uma.socket.domain.[num].limin |
Long time minimum item count |
vm.uma.socket.domain.[num].nitems |
number of items in this domain |
vm.uma.socket.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.socket.domain.[num].wss |
Working set size |
vm.uma.socket.flags |
Allocator configuration flags |
vm.uma.socket.keg |
|
vm.uma.socket.keg.align |
item alignment mask |
vm.uma.socket.keg.domain |
|
vm.uma.socket.keg.domain.[num] |
|
vm.uma.socket.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.socket.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.socket.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.socket.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.socket.keg.ipers |
items available per-slab |
vm.uma.socket.keg.name |
Keg name |
vm.uma.socket.keg.ppera |
pages per-slab allocation |
vm.uma.socket.keg.reserve |
number of reserved items |
vm.uma.socket.keg.rsize |
Real object size with alignment |
vm.uma.socket.limit |
|
vm.uma.socket.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.socket.limit.items |
Current number of allocated items if limit is set |
vm.uma.socket.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.socket.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.socket.limit.sleeps |
Total zone limit sleeps |
vm.uma.socket.size |
Allocation size |
vm.uma.socket.stats |
|
vm.uma.socket.stats.allocs |
Total allocation calls |
vm.uma.socket.stats.current |
Current number of allocated items |
vm.uma.socket.stats.fails |
Number of allocation failures |
vm.uma.socket.stats.frees |
Total free calls |
vm.uma.socket.stats.xdomain |
Free calls from the wrong domain |
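As an aside, any of the per-zone counters in this list (for example vm.uma.socket.stats.current, just above) can be read programmatically with sysctlbyname(3). The following is a minimal illustrative sketch rather than a definitive tool: the node name is taken from the list, and the width of the returned integer is probed at run time instead of being assumed.

/*
 * Illustrative only: read one vm.uma.* counter by name on FreeBSD.
 * The node string below is an example taken from the list above.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
        const char *node = "vm.uma.socket.stats.current"; /* any node from the list */
        uint64_t v64;
        uint32_t v32;
        size_t len = 0;

        /* First ask how large the value is. */
        if (sysctlbyname(node, NULL, &len, NULL, 0) == -1) {
                perror(node);
                return (1);
        }
        if (len == sizeof(v64)) {
                len = sizeof(v64);
                if (sysctlbyname(node, &v64, &len, NULL, 0) == 0)
                        printf("%s = %ju\n", node, (uintmax_t)v64);
        } else if (len == sizeof(v32)) {
                len = sizeof(v32);
                if (sysctlbyname(node, &v32, &len, NULL, 0) == 0)
                        printf("%s = %u\n", node, v32);
        } else {
                fprintf(stderr, "%s: unexpected value size %zu\n", node, len);
        }
        return (0);
}

The same pattern works for any read-only node in this list; writable nodes additionally accept a new value via the last two arguments of sysctlbyname(3).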
vm.uma.swblk |
|
vm.uma.swblk.bucket_size |
Desired per-cpu cache size |
vm.uma.swblk.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.swblk.domain |
|
vm.uma.swblk.domain.[num] |
|
vm.uma.swblk.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.swblk.domain.[num].imax |
maximum item count in this period |
vm.uma.swblk.domain.[num].imin |
minimum item count in this period |
vm.uma.swblk.domain.[num].limin |
Long time minimum item count |
vm.uma.swblk.domain.[num].nitems |
number of items in this domain |
vm.uma.swblk.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.swblk.domain.[num].wss |
Working set size |
vm.uma.swblk.flags |
Allocator configuration flags |
vm.uma.swblk.keg |
|
vm.uma.swblk.keg.align |
item alignment mask |
vm.uma.swblk.keg.domain |
|
vm.uma.swblk.keg.domain.[num] |
|
vm.uma.swblk.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.swblk.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.swblk.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.swblk.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.swblk.keg.ipers |
items available per-slab |
vm.uma.swblk.keg.name |
Keg name |
vm.uma.swblk.keg.ppera |
pages per-slab allocation |
vm.uma.swblk.keg.reserve |
number of reserved items |
vm.uma.swblk.keg.rsize |
Real object size with alignment |
vm.uma.swblk.limit |
|
vm.uma.swblk.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.swblk.limit.items |
Current number of allocated items if limit is set |
vm.uma.swblk.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.swblk.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.swblk.limit.sleeps |
Total zone limit sleeps |
vm.uma.swblk.size |
Allocation size |
vm.uma.swblk.stats |
|
vm.uma.swblk.stats.allocs |
Total allocation calls |
vm.uma.swblk.stats.current |
Current number of allocated items |
vm.uma.swblk.stats.fails |
Number of allocation failures |
vm.uma.swblk.stats.frees |
Total free calls |
vm.uma.swblk.stats.xdomain |
Free calls from the wrong domain |
vm.uma.swpctrie |
|
vm.uma.swpctrie.bucket_size |
Desired per-cpu cache size |
vm.uma.swpctrie.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.swpctrie.domain |
|
vm.uma.swpctrie.domain.[num] |
|
vm.uma.swpctrie.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.swpctrie.domain.[num].imax |
maximum item count in this period |
vm.uma.swpctrie.domain.[num].imin |
minimum item count in this period |
vm.uma.swpctrie.domain.[num].limin |
Long time minimum item count |
vm.uma.swpctrie.domain.[num].nitems |
number of items in this domain |
vm.uma.swpctrie.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.swpctrie.domain.[num].wss |
Working set size |
vm.uma.swpctrie.flags |
Allocator configuration flags |
vm.uma.swpctrie.keg |
|
vm.uma.swpctrie.keg.align |
item alignment mask |
vm.uma.swpctrie.keg.domain |
|
vm.uma.swpctrie.keg.domain.[num] |
|
vm.uma.swpctrie.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.swpctrie.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.swpctrie.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.swpctrie.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.swpctrie.keg.ipers |
items available per-slab |
vm.uma.swpctrie.keg.name |
Keg name |
vm.uma.swpctrie.keg.ppera |
pages per-slab allocation |
vm.uma.swpctrie.keg.reserve |
number of reserved items |
vm.uma.swpctrie.keg.rsize |
Real object size with alignment |
vm.uma.swpctrie.limit |
|
vm.uma.swpctrie.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.swpctrie.limit.items |
Current number of allocated items if limit is set |
vm.uma.swpctrie.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.swpctrie.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.swpctrie.limit.sleeps |
Total zone limit sleeps |
vm.uma.swpctrie.size |
Allocation size |
vm.uma.swpctrie.stats |
|
vm.uma.swpctrie.stats.allocs |
Total allocation calls |
vm.uma.swpctrie.stats.current |
Current number of allocated items |
vm.uma.swpctrie.stats.fails |
Number of allocation failures |
vm.uma.swpctrie.stats.frees |
Total free calls |
vm.uma.swpctrie.stats.xdomain |
Free calls from the wrong domain |
vm.uma.swrbuf |
|
vm.uma.swrbuf.bucket_size |
Desired per-cpu cache size |
vm.uma.swrbuf.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.swrbuf.domain |
|
vm.uma.swrbuf.domain.[num] |
|
vm.uma.swrbuf.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.swrbuf.domain.[num].imax |
maximum item count in this period |
vm.uma.swrbuf.domain.[num].imin |
minimum item count in this period |
vm.uma.swrbuf.domain.[num].limin |
Long time minimum item count |
vm.uma.swrbuf.domain.[num].nitems |
number of items in this domain |
vm.uma.swrbuf.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.swrbuf.domain.[num].wss |
Working set size |
vm.uma.swrbuf.flags |
Allocator configuration flags |
vm.uma.swrbuf.keg |
|
vm.uma.swrbuf.keg.align |
item alignment mask |
vm.uma.swrbuf.keg.domain |
|
vm.uma.swrbuf.keg.domain.[num] |
|
vm.uma.swrbuf.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.swrbuf.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.swrbuf.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.swrbuf.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.swrbuf.keg.ipers |
items available per-slab |
vm.uma.swrbuf.keg.name |
Keg name |
vm.uma.swrbuf.keg.ppera |
pages per-slab allocation |
vm.uma.swrbuf.keg.reserve |
number of reserved items |
vm.uma.swrbuf.keg.rsize |
Real object size with alignment |
vm.uma.swrbuf.limit |
|
vm.uma.swrbuf.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.swrbuf.limit.items |
Current number of allocated items if limit is set |
vm.uma.swrbuf.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.swrbuf.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.swrbuf.limit.sleeps |
Total zone limit sleeps |
vm.uma.swrbuf.size |
Allocation size |
vm.uma.swrbuf.stats |
|
vm.uma.swrbuf.stats.allocs |
Total allocation calls |
vm.uma.swrbuf.stats.current |
Current number of allocated items |
vm.uma.swrbuf.stats.fails |
Number of allocation failures |
vm.uma.swrbuf.stats.frees |
Total free calls |
vm.uma.swrbuf.stats.xdomain |
Free calls from the wrong domain |
vm.uma.swwbuf |
|
vm.uma.swwbuf.bucket_size |
Desired per-cpu cache size |
vm.uma.swwbuf.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.swwbuf.domain |
|
vm.uma.swwbuf.domain.[num] |
|
vm.uma.swwbuf.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.swwbuf.domain.[num].imax |
maximum item count in this period |
vm.uma.swwbuf.domain.[num].imin |
minimum item count in this period |
vm.uma.swwbuf.domain.[num].limin |
Long time minimum item count |
vm.uma.swwbuf.domain.[num].nitems |
number of items in this domain |
vm.uma.swwbuf.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.swwbuf.domain.[num].wss |
Working set size |
vm.uma.swwbuf.flags |
Allocator configuration flags |
vm.uma.swwbuf.keg |
|
vm.uma.swwbuf.keg.align |
item alignment mask |
vm.uma.swwbuf.keg.domain |
|
vm.uma.swwbuf.keg.domain.[num] |
|
vm.uma.swwbuf.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.swwbuf.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.swwbuf.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.swwbuf.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.swwbuf.keg.ipers |
items available per-slab |
vm.uma.swwbuf.keg.name |
Keg name |
vm.uma.swwbuf.keg.ppera |
pages per-slab allocation |
vm.uma.swwbuf.keg.reserve |
number of reserved items |
vm.uma.swwbuf.keg.rsize |
Real object size with alignment |
vm.uma.swwbuf.limit |
|
vm.uma.swwbuf.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.swwbuf.limit.items |
Current number of allocated items if limit is set |
vm.uma.swwbuf.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.swwbuf.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.swwbuf.limit.sleeps |
Total zone limit sleeps |
vm.uma.swwbuf.size |
Allocation size |
vm.uma.swwbuf.stats |
|
vm.uma.swwbuf.stats.allocs |
Total allocation calls |
vm.uma.swwbuf.stats.current |
Current number of allocated items |
vm.uma.swwbuf.stats.fails |
Number of allocation failures |
vm.uma.swwbuf.stats.frees |
Total free calls |
vm.uma.swwbuf.stats.xdomain |
Free calls from the wrong domain |
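Interpretive note (not taken from the list itself): the keg.efficiency value reported for each zone appears to approximate 100 × (keg.ipers × keg.rsize) / (keg.ppera × page size), i.e. the share of each slab actually occupied by items. For example, with ipers = 30, rsize = 136 and ppera = 1 on a 4 KiB page, 100 × 30 × 136 / 4096 ≈ 99.6, consistent with the description "Slab utilization (100 - internal fragmentation %)". Treat this as an approximation; the exact computation is internal to the allocator.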
vm.uma.syncache |
|
vm.uma.syncache.bucket_size |
Desired per-cpu cache size |
vm.uma.syncache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.syncache.domain |
|
vm.uma.syncache.domain.[num] |
|
vm.uma.syncache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.syncache.domain.[num].imax |
maximum item count in this period |
vm.uma.syncache.domain.[num].imin |
minimum item count in this period |
vm.uma.syncache.domain.[num].limin |
Long time minimum item count |
vm.uma.syncache.domain.[num].nitems |
number of items in this domain |
vm.uma.syncache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.syncache.domain.[num].wss |
Working set size |
vm.uma.syncache.flags |
Allocator configuration flags |
vm.uma.syncache.keg |
|
vm.uma.syncache.keg.align |
item alignment mask |
vm.uma.syncache.keg.domain |
|
vm.uma.syncache.keg.domain.[num] |
|
vm.uma.syncache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.syncache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.syncache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.syncache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.syncache.keg.ipers |
items available per-slab |
vm.uma.syncache.keg.name |
Keg name |
vm.uma.syncache.keg.ppera |
pages per-slab allocation |
vm.uma.syncache.keg.reserve |
number of reserved items |
vm.uma.syncache.keg.rsize |
Real object size with alignment |
vm.uma.syncache.limit |
|
vm.uma.syncache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.syncache.limit.items |
Current number of allocated items if limit is set |
vm.uma.syncache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.syncache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.syncache.limit.sleeps |
Total zone limit sleeps |
vm.uma.syncache.size |
Allocation size |
vm.uma.syncache.stats |
|
vm.uma.syncache.stats.allocs |
Total allocation calls |
vm.uma.syncache.stats.current |
Current number of allocated items |
vm.uma.syncache.stats.fails |
Number of allocation failures |
vm.uma.syncache.stats.frees |
Total free calls |
vm.uma.syncache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.syncache_[num] |
|
vm.uma.syncache_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.syncache_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.syncache_[num].domain |
|
vm.uma.syncache_[num].domain.[num] |
|
vm.uma.syncache_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.syncache_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.syncache_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.syncache_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.syncache_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.syncache_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.syncache_[num].domain.[num].wss |
Working set size |
vm.uma.syncache_[num].flags |
Allocator configuration flags |
vm.uma.syncache_[num].keg |
|
vm.uma.syncache_[num].keg.align |
item alignment mask |
vm.uma.syncache_[num].keg.domain |
|
vm.uma.syncache_[num].keg.domain.[num] |
|
vm.uma.syncache_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.syncache_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.syncache_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.syncache_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.syncache_[num].keg.ipers |
items available per-slab |
vm.uma.syncache_[num].keg.name |
Keg name |
vm.uma.syncache_[num].keg.ppera |
pages per-slab allocation |
vm.uma.syncache_[num].keg.reserve |
number of reserved items |
vm.uma.syncache_[num].keg.rsize |
Real object size with alignment |
vm.uma.syncache_[num].limit |
|
vm.uma.syncache_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.syncache_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.syncache_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.syncache_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.syncache_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.syncache_[num].size |
Allocation size |
vm.uma.syncache_[num].stats |
|
vm.uma.syncache_[num].stats.allocs |
Total allocation calls |
vm.uma.syncache_[num].stats.current |
Current number of allocated items |
vm.uma.syncache_[num].stats.fails |
Number of allocation failures |
vm.uma.syncache_[num].stats.frees |
Total free calls |
vm.uma.syncache_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.taskq_zone |
|
vm.uma.taskq_zone.bucket_size |
Desired per-cpu cache size |
vm.uma.taskq_zone.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.taskq_zone.domain |
|
vm.uma.taskq_zone.domain.[num] |
|
vm.uma.taskq_zone.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.taskq_zone.domain.[num].imax |
maximum item count in this period |
vm.uma.taskq_zone.domain.[num].imin |
minimum item count in this period |
vm.uma.taskq_zone.domain.[num].limin |
Long time minimum item count |
vm.uma.taskq_zone.domain.[num].nitems |
number of items in this domain |
vm.uma.taskq_zone.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.taskq_zone.domain.[num].wss |
Working set size |
vm.uma.taskq_zone.flags |
Allocator configuration flags |
vm.uma.taskq_zone.keg |
|
vm.uma.taskq_zone.keg.align |
item alignment mask |
vm.uma.taskq_zone.keg.domain |
|
vm.uma.taskq_zone.keg.domain.[num] |
|
vm.uma.taskq_zone.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.taskq_zone.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.taskq_zone.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.taskq_zone.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.taskq_zone.keg.ipers |
items available per-slab |
vm.uma.taskq_zone.keg.name |
Keg name |
vm.uma.taskq_zone.keg.ppera |
pages per-slab allocation |
vm.uma.taskq_zone.keg.reserve |
number of reserved items |
vm.uma.taskq_zone.keg.rsize |
Real object size with alignment |
vm.uma.taskq_zone.limit |
|
vm.uma.taskq_zone.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.taskq_zone.limit.items |
Current number of allocated items if limit is set |
vm.uma.taskq_zone.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.taskq_zone.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.taskq_zone.limit.sleeps |
Total zone limit sleeps |
vm.uma.taskq_zone.size |
Allocation size |
vm.uma.taskq_zone.stats |
|
vm.uma.taskq_zone.stats.allocs |
Total allocation calls |
vm.uma.taskq_zone.stats.current |
Current number of allocated items |
vm.uma.taskq_zone.stats.fails |
Number of allocation failures |
vm.uma.taskq_zone.stats.frees |
Total free calls |
vm.uma.taskq_zone.stats.xdomain |
Free calls from the wrong domain |
vm.uma.tcp_inpcb |
|
vm.uma.tcp_inpcb.bucket_size |
Desired per-cpu cache size |
vm.uma.tcp_inpcb.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.tcp_inpcb.domain |
|
vm.uma.tcp_inpcb.domain.[num] |
|
vm.uma.tcp_inpcb.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.tcp_inpcb.domain.[num].imax |
maximum item count in this period |
vm.uma.tcp_inpcb.domain.[num].imin |
minimum item count in this period |
vm.uma.tcp_inpcb.domain.[num].limin |
Long time minimum item count |
vm.uma.tcp_inpcb.domain.[num].nitems |
number of items in this domain |
vm.uma.tcp_inpcb.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.tcp_inpcb.domain.[num].wss |
Working set size |
vm.uma.tcp_inpcb.flags |
Allocator configuration flags |
vm.uma.tcp_inpcb.keg |
|
vm.uma.tcp_inpcb.keg.align |
item alignment mask |
vm.uma.tcp_inpcb.keg.domain |
|
vm.uma.tcp_inpcb.keg.domain.[num] |
|
vm.uma.tcp_inpcb.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.tcp_inpcb.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.tcp_inpcb.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.tcp_inpcb.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.tcp_inpcb.keg.ipers |
items available per-slab |
vm.uma.tcp_inpcb.keg.name |
Keg name |
vm.uma.tcp_inpcb.keg.ppera |
pages per-slab allocation |
vm.uma.tcp_inpcb.keg.reserve |
number of reserved items |
vm.uma.tcp_inpcb.keg.rsize |
Real object size with alignment |
vm.uma.tcp_inpcb.limit |
|
vm.uma.tcp_inpcb.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.tcp_inpcb.limit.items |
Current number of allocated items if limit is set |
vm.uma.tcp_inpcb.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.tcp_inpcb.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.tcp_inpcb.limit.sleeps |
Total zone limit sleeps |
vm.uma.tcp_inpcb.size |
Allocation size |
vm.uma.tcp_inpcb.stats |
|
vm.uma.tcp_inpcb.stats.allocs |
Total allocation calls |
vm.uma.tcp_inpcb.stats.current |
Current number of allocated items |
vm.uma.tcp_inpcb.stats.fails |
Number of allocation failures |
vm.uma.tcp_inpcb.stats.frees |
Total free calls |
vm.uma.tcp_inpcb.stats.xdomain |
Free calls from the wrong domain |
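Another interpretive note, using only fields defined in this list: multiplying stats.current by size gives a rough estimate of the memory currently handed out to consumers of a zone, excluding items sitting in per-CPU buckets and free slab space. With hypothetical readings of vm.uma.tcp_inpcb.stats.current = 1024 and vm.uma.tcp_inpcb.size = 1424, that would be about 1024 × 1424 ≈ 1.4 MB of outstanding PCB memory. The total VM backing for the zone is instead reflected by keg.domain.[num].pages ("Total pages currently allocated from VM") times the page size.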
vm.uma.tcp_inpcb_ports |
|
vm.uma.tcp_inpcb_ports.bucket_size |
Desired per-cpu cache size |
vm.uma.tcp_inpcb_ports.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.tcp_inpcb_ports.domain |
|
vm.uma.tcp_inpcb_ports.domain.[num] |
|
vm.uma.tcp_inpcb_ports.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.tcp_inpcb_ports.domain.[num].imax |
maximum item count in this period |
vm.uma.tcp_inpcb_ports.domain.[num].imin |
minimum item count in this period |
vm.uma.tcp_inpcb_ports.domain.[num].limin |
Long time minimum item count |
vm.uma.tcp_inpcb_ports.domain.[num].nitems |
number of items in this domain |
vm.uma.tcp_inpcb_ports.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.tcp_inpcb_ports.domain.[num].wss |
Working set size |
vm.uma.tcp_inpcb_ports.flags |
Allocator configuration flags |
vm.uma.tcp_inpcb_ports.keg |
|
vm.uma.tcp_inpcb_ports.keg.align |
item alignment mask |
vm.uma.tcp_inpcb_ports.keg.domain |
|
vm.uma.tcp_inpcb_ports.keg.domain.[num] |
|
vm.uma.tcp_inpcb_ports.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.tcp_inpcb_ports.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.tcp_inpcb_ports.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.tcp_inpcb_ports.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.tcp_inpcb_ports.keg.ipers |
items available per-slab |
vm.uma.tcp_inpcb_ports.keg.name |
Keg name |
vm.uma.tcp_inpcb_ports.keg.ppera |
pages per-slab allocation |
vm.uma.tcp_inpcb_ports.keg.reserve |
number of reserved items |
vm.uma.tcp_inpcb_ports.keg.rsize |
Real object size with alignment |
vm.uma.tcp_inpcb_ports.limit |
|
vm.uma.tcp_inpcb_ports.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.tcp_inpcb_ports.limit.items |
Current number of allocated items if limit is set |
vm.uma.tcp_inpcb_ports.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.tcp_inpcb_ports.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.tcp_inpcb_ports.limit.sleeps |
Total zone limit sleeps |
vm.uma.tcp_inpcb_ports.size |
Allocation size |
vm.uma.tcp_inpcb_ports.stats |
|
vm.uma.tcp_inpcb_ports.stats.allocs |
Total allocation calls |
vm.uma.tcp_inpcb_ports.stats.current |
Current number of allocated items |
vm.uma.tcp_inpcb_ports.stats.fails |
Number of allocation failures |
vm.uma.tcp_inpcb_ports.stats.frees |
Total free calls |
vm.uma.tcp_inpcb_ports.stats.xdomain |
Free calls from the wrong domain |
vm.uma.tcp_log |
|
vm.uma.tcp_log.bucket_size |
Desired per-cpu cache size |
vm.uma.tcp_log.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.tcp_log.domain |
|
vm.uma.tcp_log.domain.[num] |
|
vm.uma.tcp_log.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.tcp_log.domain.[num].imax |
maximum item count in this period |
vm.uma.tcp_log.domain.[num].imin |
minimum item count in this period |
vm.uma.tcp_log.domain.[num].limin |
Long time minimum item count |
vm.uma.tcp_log.domain.[num].nitems |
number of items in this domain |
vm.uma.tcp_log.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.tcp_log.domain.[num].wss |
Working set size |
vm.uma.tcp_log.flags |
Allocator configuration flags |
vm.uma.tcp_log.keg |
|
vm.uma.tcp_log.keg.align |
item alignment mask |
vm.uma.tcp_log.keg.domain |
|
vm.uma.tcp_log.keg.domain.[num] |
|
vm.uma.tcp_log.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.tcp_log.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.tcp_log.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.tcp_log.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.tcp_log.keg.ipers |
items available per-slab |
vm.uma.tcp_log.keg.name |
Keg name |
vm.uma.tcp_log.keg.ppera |
pages per-slab allocation |
vm.uma.tcp_log.keg.reserve |
number of reserved items |
vm.uma.tcp_log.keg.rsize |
Real object size with alignment |
vm.uma.tcp_log.limit |
|
vm.uma.tcp_log.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.tcp_log.limit.items |
Current number of allocated items if limit is set |
vm.uma.tcp_log.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.tcp_log.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.tcp_log.limit.sleeps |
Total zone limit sleeps |
vm.uma.tcp_log.size |
Allocation size |
vm.uma.tcp_log.stats |
|
vm.uma.tcp_log.stats.allocs |
Total allocation calls |
vm.uma.tcp_log.stats.current |
Current number of allocated items |
vm.uma.tcp_log.stats.fails |
Number of allocation failures |
vm.uma.tcp_log.stats.frees |
Total free calls |
vm.uma.tcp_log.stats.xdomain |
Free calls from the wrong domain |
vm.uma.tcp_log_id_bucket |
|
vm.uma.tcp_log_id_bucket.bucket_size |
Desired per-cpu cache size |
vm.uma.tcp_log_id_bucket.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.tcp_log_id_bucket.domain |
|
vm.uma.tcp_log_id_bucket.domain.[num] |
|
vm.uma.tcp_log_id_bucket.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.tcp_log_id_bucket.domain.[num].imax |
maximum item count in this period |
vm.uma.tcp_log_id_bucket.domain.[num].imin |
minimum item count in this period |
vm.uma.tcp_log_id_bucket.domain.[num].limin |
Long time minimum item count |
vm.uma.tcp_log_id_bucket.domain.[num].nitems |
number of items in this domain |
vm.uma.tcp_log_id_bucket.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.tcp_log_id_bucket.domain.[num].wss |
Working set size |
vm.uma.tcp_log_id_bucket.flags |
Allocator configuration flags |
vm.uma.tcp_log_id_bucket.keg |
|
vm.uma.tcp_log_id_bucket.keg.align |
item alignment mask |
vm.uma.tcp_log_id_bucket.keg.domain |
|
vm.uma.tcp_log_id_bucket.keg.domain.[num] |
|
vm.uma.tcp_log_id_bucket.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.tcp_log_id_bucket.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.tcp_log_id_bucket.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.tcp_log_id_bucket.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.tcp_log_id_bucket.keg.ipers |
items available per-slab |
vm.uma.tcp_log_id_bucket.keg.name |
Keg name |
vm.uma.tcp_log_id_bucket.keg.ppera |
pages per-slab allocation |
vm.uma.tcp_log_id_bucket.keg.reserve |
number of reserved items |
vm.uma.tcp_log_id_bucket.keg.rsize |
Real object size with alignment |
vm.uma.tcp_log_id_bucket.limit |
|
vm.uma.tcp_log_id_bucket.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.tcp_log_id_bucket.limit.items |
Current number of allocated items if limit is set |
vm.uma.tcp_log_id_bucket.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.tcp_log_id_bucket.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.tcp_log_id_bucket.limit.sleeps |
Total zone limit sleeps |
vm.uma.tcp_log_id_bucket.size |
Allocation size |
vm.uma.tcp_log_id_bucket.stats |
|
vm.uma.tcp_log_id_bucket.stats.allocs |
Total allocation calls |
vm.uma.tcp_log_id_bucket.stats.current |
Current number of allocated items |
vm.uma.tcp_log_id_bucket.stats.fails |
Number of allocation failures |
vm.uma.tcp_log_id_bucket.stats.frees |
Total free calls |
vm.uma.tcp_log_id_bucket.stats.xdomain |
Free calls from the wrong domain |
vm.uma.tcp_log_id_node |
|
vm.uma.tcp_log_id_node.bucket_size |
Desired per-cpu cache size |
vm.uma.tcp_log_id_node.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.tcp_log_id_node.domain |
|
vm.uma.tcp_log_id_node.domain.[num] |
|
vm.uma.tcp_log_id_node.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.tcp_log_id_node.domain.[num].imax |
maximum item count in this period |
vm.uma.tcp_log_id_node.domain.[num].imin |
minimum item count in this period |
vm.uma.tcp_log_id_node.domain.[num].limin |
Long time minimum item count |
vm.uma.tcp_log_id_node.domain.[num].nitems |
number of items in this domain |
vm.uma.tcp_log_id_node.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.tcp_log_id_node.domain.[num].wss |
Working set size |
vm.uma.tcp_log_id_node.flags |
Allocator configuration flags |
vm.uma.tcp_log_id_node.keg |
|
vm.uma.tcp_log_id_node.keg.align |
item alignment mask |
vm.uma.tcp_log_id_node.keg.domain |
|
vm.uma.tcp_log_id_node.keg.domain.[num] |
|
vm.uma.tcp_log_id_node.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.tcp_log_id_node.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.tcp_log_id_node.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.tcp_log_id_node.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.tcp_log_id_node.keg.ipers |
items available per-slab |
vm.uma.tcp_log_id_node.keg.name |
Keg name |
vm.uma.tcp_log_id_node.keg.ppera |
pages per-slab allocation |
vm.uma.tcp_log_id_node.keg.reserve |
number of reserved items |
vm.uma.tcp_log_id_node.keg.rsize |
Real object size with alignment |
vm.uma.tcp_log_id_node.limit |
|
vm.uma.tcp_log_id_node.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.tcp_log_id_node.limit.items |
Current number of allocated items if limit is set |
vm.uma.tcp_log_id_node.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.tcp_log_id_node.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.tcp_log_id_node.limit.sleeps |
Total zone limit sleeps |
vm.uma.tcp_log_id_node.size |
Allocation size |
vm.uma.tcp_log_id_node.stats |
|
vm.uma.tcp_log_id_node.stats.allocs |
Total allocation calls |
vm.uma.tcp_log_id_node.stats.current |
Current number of allocated items |
vm.uma.tcp_log_id_node.stats.fails |
Number of allocation failures |
vm.uma.tcp_log_id_node.stats.frees |
Total free calls |
vm.uma.tcp_log_id_node.stats.xdomain |
Free calls from the wrong domain |
vm.uma.tcpcb |
|
vm.uma.tcpcb.bucket_size |
Desired per-cpu cache size |
vm.uma.tcpcb.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.tcpcb.domain |
|
vm.uma.tcpcb.domain.[num] |
|
vm.uma.tcpcb.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.tcpcb.domain.[num].imax |
maximum item count in this period |
vm.uma.tcpcb.domain.[num].imin |
minimum item count in this period |
vm.uma.tcpcb.domain.[num].limin |
Long time minimum item count |
vm.uma.tcpcb.domain.[num].nitems |
number of items in this domain |
vm.uma.tcpcb.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.tcpcb.domain.[num].wss |
Working set size |
vm.uma.tcpcb.flags |
Allocator configuration flags |
vm.uma.tcpcb.keg |
|
vm.uma.tcpcb.keg.align |
item alignment mask |
vm.uma.tcpcb.keg.domain |
|
vm.uma.tcpcb.keg.domain.[num] |
|
vm.uma.tcpcb.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.tcpcb.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.tcpcb.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.tcpcb.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.tcpcb.keg.ipers |
items available per-slab |
vm.uma.tcpcb.keg.name |
Keg name |
vm.uma.tcpcb.keg.ppera |
pages per-slab allocation |
vm.uma.tcpcb.keg.reserve |
number of reserved items |
vm.uma.tcpcb.keg.rsize |
Real object size with alignment |
vm.uma.tcpcb.limit |
|
vm.uma.tcpcb.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.tcpcb.limit.items |
Current number of allocated items if limit is set |
vm.uma.tcpcb.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.tcpcb.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.tcpcb.limit.sleeps |
Total zone limit sleeps |
vm.uma.tcpcb.size |
Allocation size |
vm.uma.tcpcb.stats |
|
vm.uma.tcpcb.stats.allocs |
Total allocation calls |
vm.uma.tcpcb.stats.current |
Current number of allocated items |
vm.uma.tcpcb.stats.fails |
Number of allocation failures |
vm.uma.tcpcb.stats.frees |
Total free calls |
vm.uma.tcpcb.stats.xdomain |
Free calls from the wrong domain |
vm.uma.tcpreass |
|
vm.uma.tcpreass.bucket_size |
Desired per-cpu cache size |
vm.uma.tcpreass.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.tcpreass.domain |
|
vm.uma.tcpreass.domain.[num] |
|
vm.uma.tcpreass.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.tcpreass.domain.[num].imax |
maximum item count in this period |
vm.uma.tcpreass.domain.[num].imin |
minimum item count in this period |
vm.uma.tcpreass.domain.[num].limin |
Long time minimum item count |
vm.uma.tcpreass.domain.[num].nitems |
number of items in this domain |
vm.uma.tcpreass.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.tcpreass.domain.[num].wss |
Working set size |
vm.uma.tcpreass.flags |
Allocator configuration flags |
vm.uma.tcpreass.keg |
|
vm.uma.tcpreass.keg.align |
item alignment mask |
vm.uma.tcpreass.keg.domain |
|
vm.uma.tcpreass.keg.domain.[num] |
|
vm.uma.tcpreass.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.tcpreass.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.tcpreass.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.tcpreass.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.tcpreass.keg.ipers |
items available per-slab |
vm.uma.tcpreass.keg.name |
Keg name |
vm.uma.tcpreass.keg.ppera |
pages per-slab allocation |
vm.uma.tcpreass.keg.reserve |
number of reserved items |
vm.uma.tcpreass.keg.rsize |
Real object size with alignment |
vm.uma.tcpreass.limit |
|
vm.uma.tcpreass.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.tcpreass.limit.items |
Current number of allocated items if limit is set |
vm.uma.tcpreass.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.tcpreass.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.tcpreass.limit.sleeps |
Total zone limit sleeps |
vm.uma.tcpreass.size |
Allocation size |
vm.uma.tcpreass.stats |
|
vm.uma.tcpreass.stats.allocs |
Total allocation calls |
vm.uma.tcpreass.stats.current |
Current number of allocated items |
vm.uma.tcpreass.stats.fails |
Number of allocation failures |
vm.uma.tcpreass.stats.frees |
Total free calls |
vm.uma.tcpreass.stats.xdomain |
Free calls from the wrong domain |
vm.uma.tcptw |
|
vm.uma.tcptw.bucket_size |
Desired per-cpu cache size |
vm.uma.tcptw.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.tcptw.domain |
|
vm.uma.tcptw.domain.[num] |
|
vm.uma.tcptw.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.tcptw.domain.[num].imax |
maximum item count in this period |
vm.uma.tcptw.domain.[num].imin |
minimum item count in this period |
vm.uma.tcptw.domain.[num].limin |
Long time minimum item count |
vm.uma.tcptw.domain.[num].nitems |
number of items in this domain |
vm.uma.tcptw.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.tcptw.domain.[num].wss |
Working set size |
vm.uma.tcptw.flags |
Allocator configuration flags |
vm.uma.tcptw.keg |
|
vm.uma.tcptw.keg.align |
item alignment mask |
vm.uma.tcptw.keg.domain |
|
vm.uma.tcptw.keg.domain.[num] |
|
vm.uma.tcptw.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.tcptw.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.tcptw.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.tcptw.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.tcptw.keg.ipers |
items available per-slab |
vm.uma.tcptw.keg.name |
Keg name |
vm.uma.tcptw.keg.ppera |
pages per-slab allocation |
vm.uma.tcptw.keg.reserve |
number of reserved items |
vm.uma.tcptw.keg.rsize |
Real object size with alignment |
vm.uma.tcptw.limit |
|
vm.uma.tcptw.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.tcptw.limit.items |
Current number of allocated items if limit is set |
vm.uma.tcptw.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.tcptw.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.tcptw.limit.sleeps |
Total zone limit sleeps |
vm.uma.tcptw.size |
Allocation size |
vm.uma.tcptw.stats |
|
vm.uma.tcptw.stats.allocs |
Total allocation calls |
vm.uma.tcptw.stats.current |
Current number of allocated items |
vm.uma.tcptw.stats.fails |
Number of allocation failures |
vm.uma.tcptw.stats.frees |
Total free calls |
vm.uma.tcptw.stats.xdomain |
Free calls from the wrong domain |
vm.uma.tfo |
|
vm.uma.tfo.bucket_size |
Desired per-cpu cache size |
vm.uma.tfo.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.tfo.domain |
|
vm.uma.tfo.domain.[num] |
|
vm.uma.tfo.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.tfo.domain.[num].imax |
maximum item count in this period |
vm.uma.tfo.domain.[num].imin |
minimum item count in this period |
vm.uma.tfo.domain.[num].limin |
Long time minimum item count |
vm.uma.tfo.domain.[num].nitems |
number of items in this domain |
vm.uma.tfo.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.tfo.domain.[num].wss |
Working set size |
vm.uma.tfo.flags |
Allocator configuration flags |
vm.uma.tfo.keg |
|
vm.uma.tfo.keg.align |
item alignment mask |
vm.uma.tfo.keg.domain |
|
vm.uma.tfo.keg.domain.[num] |
|
vm.uma.tfo.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.tfo.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.tfo.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.tfo.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.tfo.keg.ipers |
items available per-slab |
vm.uma.tfo.keg.name |
Keg name |
vm.uma.tfo.keg.ppera |
pages per-slab allocation |
vm.uma.tfo.keg.reserve |
number of reserved items |
vm.uma.tfo.keg.rsize |
Real object size with alignment |
vm.uma.tfo.limit |
|
vm.uma.tfo.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.tfo.limit.items |
Current number of allocated items if limit is set |
vm.uma.tfo.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.tfo.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.tfo.limit.sleeps |
Total zone limit sleeps |
vm.uma.tfo.size |
Allocation size |
vm.uma.tfo.stats |
|
vm.uma.tfo.stats.allocs |
Total allocation calls |
vm.uma.tfo.stats.current |
Current number of allocated items |
vm.uma.tfo.stats.fails |
Number of allocation failures |
vm.uma.tfo.stats.frees |
Total free calls |
vm.uma.tfo.stats.xdomain |
Free calls from the wrong domain |
vm.uma.tfo_[num] |
|
vm.uma.tfo_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.tfo_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.tfo_[num].domain |
|
vm.uma.tfo_[num].domain.[num] |
|
vm.uma.tfo_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.tfo_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.tfo_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.tfo_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.tfo_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.tfo_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.tfo_[num].domain.[num].wss |
Working set size |
vm.uma.tfo_[num].flags |
Allocator configuration flags |
vm.uma.tfo_[num].keg |
|
vm.uma.tfo_[num].keg.align |
item alignment mask |
vm.uma.tfo_[num].keg.domain |
|
vm.uma.tfo_[num].keg.domain.[num] |
|
vm.uma.tfo_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.tfo_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.tfo_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.tfo_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.tfo_[num].keg.ipers |
items available per-slab |
vm.uma.tfo_[num].keg.name |
Keg name |
vm.uma.tfo_[num].keg.ppera |
pages per-slab allocation |
vm.uma.tfo_[num].keg.reserve |
number of reserved items |
vm.uma.tfo_[num].keg.rsize |
Real object size with alignment |
vm.uma.tfo_[num].limit |
|
vm.uma.tfo_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.tfo_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.tfo_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.tfo_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.tfo_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.tfo_[num].size |
Allocation size |
vm.uma.tfo_[num].stats |
|
vm.uma.tfo_[num].stats.allocs |
Total allocation calls |
vm.uma.tfo_[num].stats.current |
Current number of allocated items |
vm.uma.tfo_[num].stats.fails |
Number of allocation failures |
vm.uma.tfo_[num].stats.frees |
Total free calls |
vm.uma.tfo_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.tfo_ccache_entries |
|
vm.uma.tfo_ccache_entries.bucket_size |
Desired per-cpu cache size |
vm.uma.tfo_ccache_entries.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.tfo_ccache_entries.domain |
|
vm.uma.tfo_ccache_entries.domain.[num] |
|
vm.uma.tfo_ccache_entries.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.tfo_ccache_entries.domain.[num].imax |
maximum item count in this period |
vm.uma.tfo_ccache_entries.domain.[num].imin |
minimum item count in this period |
vm.uma.tfo_ccache_entries.domain.[num].limin |
Long time minimum item count |
vm.uma.tfo_ccache_entries.domain.[num].nitems |
number of items in this domain |
vm.uma.tfo_ccache_entries.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.tfo_ccache_entries.domain.[num].wss |
Working set size |
vm.uma.tfo_ccache_entries.flags |
Allocator configuration flags |
vm.uma.tfo_ccache_entries.keg |
|
vm.uma.tfo_ccache_entries.keg.align |
item alignment mask |
vm.uma.tfo_ccache_entries.keg.domain |
|
vm.uma.tfo_ccache_entries.keg.domain.[num] |
|
vm.uma.tfo_ccache_entries.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.tfo_ccache_entries.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.tfo_ccache_entries.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.tfo_ccache_entries.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.tfo_ccache_entries.keg.ipers |
items available per-slab |
vm.uma.tfo_ccache_entries.keg.name |
Keg name |
vm.uma.tfo_ccache_entries.keg.ppera |
pages per-slab allocation |
vm.uma.tfo_ccache_entries.keg.reserve |
number of reserved items |
vm.uma.tfo_ccache_entries.keg.rsize |
Real object size with alignment |
vm.uma.tfo_ccache_entries.limit |
|
vm.uma.tfo_ccache_entries.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.tfo_ccache_entries.limit.items |
Current number of allocated items if limit is set |
vm.uma.tfo_ccache_entries.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.tfo_ccache_entries.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.tfo_ccache_entries.limit.sleeps |
Total zone limit sleeps |
vm.uma.tfo_ccache_entries.size |
Allocation size |
vm.uma.tfo_ccache_entries.stats |
|
vm.uma.tfo_ccache_entries.stats.allocs |
Total allocation calls |
vm.uma.tfo_ccache_entries.stats.current |
Current number of allocated items |
vm.uma.tfo_ccache_entries.stats.fails |
Number of allocation failures |
vm.uma.tfo_ccache_entries.stats.frees |
Total free calls |
vm.uma.tfo_ccache_entries.stats.xdomain |
Free calls from the wrong domain |
vm.uma.tfo_ccache_entries_[num] |
|
vm.uma.tfo_ccache_entries_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.tfo_ccache_entries_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.tfo_ccache_entries_[num].domain |
|
vm.uma.tfo_ccache_entries_[num].domain.[num] |
|
vm.uma.tfo_ccache_entries_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.tfo_ccache_entries_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.tfo_ccache_entries_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.tfo_ccache_entries_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.tfo_ccache_entries_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.tfo_ccache_entries_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.tfo_ccache_entries_[num].domain.[num].wss |
Working set size |
vm.uma.tfo_ccache_entries_[num].flags |
Allocator configuration flags |
vm.uma.tfo_ccache_entries_[num].keg |
|
vm.uma.tfo_ccache_entries_[num].keg.align |
item alignment mask |
vm.uma.tfo_ccache_entries_[num].keg.domain |
|
vm.uma.tfo_ccache_entries_[num].keg.domain.[num] |
|
vm.uma.tfo_ccache_entries_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.tfo_ccache_entries_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.tfo_ccache_entries_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.tfo_ccache_entries_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.tfo_ccache_entries_[num].keg.ipers |
items available per-slab |
vm.uma.tfo_ccache_entries_[num].keg.name |
Keg name |
vm.uma.tfo_ccache_entries_[num].keg.ppera |
pages per-slab allocation |
vm.uma.tfo_ccache_entries_[num].keg.reserve |
number of reserved items |
vm.uma.tfo_ccache_entries_[num].keg.rsize |
Real object size with alignment |
vm.uma.tfo_ccache_entries_[num].limit |
|
vm.uma.tfo_ccache_entries_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.tfo_ccache_entries_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.tfo_ccache_entries_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.tfo_ccache_entries_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.tfo_ccache_entries_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.tfo_ccache_entries_[num].size |
Allocation size |
vm.uma.tfo_ccache_entries_[num].stats |
|
vm.uma.tfo_ccache_entries_[num].stats.allocs |
Total allocation calls |
vm.uma.tfo_ccache_entries_[num].stats.current |
Current number of allocated items |
vm.uma.tfo_ccache_entries_[num].stats.fails |
Number of allocation failures |
vm.uma.tfo_ccache_entries_[num].stats.frees |
Total free calls |
vm.uma.tfo_ccache_entries_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.ttyinq |
|
vm.uma.ttyinq.bucket_size |
Desired per-cpu cache size |
vm.uma.ttyinq.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.ttyinq.domain |
|
vm.uma.ttyinq.domain.[num] |
|
vm.uma.ttyinq.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.ttyinq.domain.[num].imax |
maximum item count in this period |
vm.uma.ttyinq.domain.[num].imin |
minimum item count in this period |
vm.uma.ttyinq.domain.[num].limin |
Long time minimum item count |
vm.uma.ttyinq.domain.[num].nitems |
number of items in this domain |
vm.uma.ttyinq.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.ttyinq.domain.[num].wss |
Working set size |
vm.uma.ttyinq.flags |
Allocator configuration flags |
vm.uma.ttyinq.keg |
|
vm.uma.ttyinq.keg.align |
item alignment mask |
vm.uma.ttyinq.keg.domain |
|
vm.uma.ttyinq.keg.domain.[num] |
|
vm.uma.ttyinq.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.ttyinq.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.ttyinq.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.ttyinq.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.ttyinq.keg.ipers |
items available per-slab |
vm.uma.ttyinq.keg.name |
Keg name |
vm.uma.ttyinq.keg.ppera |
pages per-slab allocation |
vm.uma.ttyinq.keg.reserve |
number of reserved items |
vm.uma.ttyinq.keg.rsize |
Real object size with alignment |
vm.uma.ttyinq.limit |
|
vm.uma.ttyinq.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.ttyinq.limit.items |
Current number of allocated items if limit is set |
vm.uma.ttyinq.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.ttyinq.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.ttyinq.limit.sleeps |
Total zone limit sleeps |
vm.uma.ttyinq.size |
Allocation size |
vm.uma.ttyinq.stats |
|
vm.uma.ttyinq.stats.allocs |
Total allocation calls |
vm.uma.ttyinq.stats.current |
Current number of allocated items |
vm.uma.ttyinq.stats.fails |
Number of allocation failures |
vm.uma.ttyinq.stats.frees |
Total free calls |
vm.uma.ttyinq.stats.xdomain |
Free calls from the wrong domain |
vm.uma.ttyoutq |
|
vm.uma.ttyoutq.bucket_size |
Desired per-cpu cache size |
vm.uma.ttyoutq.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.ttyoutq.domain |
|
vm.uma.ttyoutq.domain.[num] |
|
vm.uma.ttyoutq.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.ttyoutq.domain.[num].imax |
maximum item count in this period |
vm.uma.ttyoutq.domain.[num].imin |
minimum item count in this period |
vm.uma.ttyoutq.domain.[num].limin |
Long time minimum item count |
vm.uma.ttyoutq.domain.[num].nitems |
number of items in this domain |
vm.uma.ttyoutq.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.ttyoutq.domain.[num].wss |
Working set size |
vm.uma.ttyoutq.flags |
Allocator configuration flags |
vm.uma.ttyoutq.keg |
|
vm.uma.ttyoutq.keg.align |
item alignment mask |
vm.uma.ttyoutq.keg.domain |
|
vm.uma.ttyoutq.keg.domain.[num] |
|
vm.uma.ttyoutq.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.ttyoutq.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.ttyoutq.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.ttyoutq.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.ttyoutq.keg.ipers |
items available per-slab |
vm.uma.ttyoutq.keg.name |
Keg name |
vm.uma.ttyoutq.keg.ppera |
pages per-slab allocation |
vm.uma.ttyoutq.keg.reserve |
number of reserved items |
vm.uma.ttyoutq.keg.rsize |
Real object size with alignment |
vm.uma.ttyoutq.limit |
|
vm.uma.ttyoutq.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.ttyoutq.limit.items |
Current number of allocated items if limit is set |
vm.uma.ttyoutq.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.ttyoutq.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.ttyoutq.limit.sleeps |
Total zone limit sleeps |
vm.uma.ttyoutq.size |
Allocation size |
vm.uma.ttyoutq.stats |
|
vm.uma.ttyoutq.stats.allocs |
Total allocation calls |
vm.uma.ttyoutq.stats.current |
Current number of allocated items |
vm.uma.ttyoutq.stats.fails |
Number of allocation failures |
vm.uma.ttyoutq.stats.frees |
Total free calls |
vm.uma.ttyoutq.stats.xdomain |
Free calls from the wrong domain |
vm.uma.udp_inpcb |
|
vm.uma.udp_inpcb.bucket_size |
Desired per-cpu cache size |
vm.uma.udp_inpcb.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.udp_inpcb.domain |
|
vm.uma.udp_inpcb.domain.[num] |
|
vm.uma.udp_inpcb.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.udp_inpcb.domain.[num].imax |
maximum item count in this period |
vm.uma.udp_inpcb.domain.[num].imin |
minimum item count in this period |
vm.uma.udp_inpcb.domain.[num].limin |
Long time minimum item count |
vm.uma.udp_inpcb.domain.[num].nitems |
number of items in this domain |
vm.uma.udp_inpcb.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.udp_inpcb.domain.[num].wss |
Working set size |
vm.uma.udp_inpcb.flags |
Allocator configuration flags |
vm.uma.udp_inpcb.keg |
|
vm.uma.udp_inpcb.keg.align |
item alignment mask |
vm.uma.udp_inpcb.keg.domain |
|
vm.uma.udp_inpcb.keg.domain.[num] |
|
vm.uma.udp_inpcb.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.udp_inpcb.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.udp_inpcb.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.udp_inpcb.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.udp_inpcb.keg.ipers |
items available per-slab |
vm.uma.udp_inpcb.keg.name |
Keg name |
vm.uma.udp_inpcb.keg.ppera |
pages per-slab allocation |
vm.uma.udp_inpcb.keg.reserve |
number of reserved items |
vm.uma.udp_inpcb.keg.rsize |
Real object size with alignment |
vm.uma.udp_inpcb.limit |
|
vm.uma.udp_inpcb.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.udp_inpcb.limit.items |
Current number of allocated items if limit is set |
vm.uma.udp_inpcb.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.udp_inpcb.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.udp_inpcb.limit.sleeps |
Total zone limit sleeps |
vm.uma.udp_inpcb.size |
Allocation size |
vm.uma.udp_inpcb.stats |
|
vm.uma.udp_inpcb.stats.allocs |
Total allocation calls |
vm.uma.udp_inpcb.stats.current |
Current number of allocated items |
vm.uma.udp_inpcb.stats.fails |
Number of allocation failures |
vm.uma.udp_inpcb.stats.frees |
Total free calls |
vm.uma.udp_inpcb.stats.xdomain |
Free calls from the wrong domain |
vm.uma.udp_inpcb_ports |
|
vm.uma.udp_inpcb_ports.bucket_size |
Desired per-cpu cache size |
vm.uma.udp_inpcb_ports.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.udp_inpcb_ports.domain |
|
vm.uma.udp_inpcb_ports.domain.[num] |
|
vm.uma.udp_inpcb_ports.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.udp_inpcb_ports.domain.[num].imax |
maximum item count in this period |
vm.uma.udp_inpcb_ports.domain.[num].imin |
minimum item count in this period |
vm.uma.udp_inpcb_ports.domain.[num].limin |
Long time minimum item count |
vm.uma.udp_inpcb_ports.domain.[num].nitems |
number of items in this domain |
vm.uma.udp_inpcb_ports.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.udp_inpcb_ports.domain.[num].wss |
Working set size |
vm.uma.udp_inpcb_ports.flags |
Allocator configuration flags |
vm.uma.udp_inpcb_ports.keg |
|
vm.uma.udp_inpcb_ports.keg.align |
item alignment mask |
vm.uma.udp_inpcb_ports.keg.domain |
|
vm.uma.udp_inpcb_ports.keg.domain.[num] |
|
vm.uma.udp_inpcb_ports.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.udp_inpcb_ports.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.udp_inpcb_ports.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.udp_inpcb_ports.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.udp_inpcb_ports.keg.ipers |
items available per-slab |
vm.uma.udp_inpcb_ports.keg.name |
Keg name |
vm.uma.udp_inpcb_ports.keg.ppera |
pages per-slab allocation |
vm.uma.udp_inpcb_ports.keg.reserve |
number of reserved items |
vm.uma.udp_inpcb_ports.keg.rsize |
Real object size with alignment |
vm.uma.udp_inpcb_ports.limit |
|
vm.uma.udp_inpcb_ports.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.udp_inpcb_ports.limit.items |
Current number of allocated items if limit is set |
vm.uma.udp_inpcb_ports.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.udp_inpcb_ports.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.udp_inpcb_ports.limit.sleeps |
Total zone limit sleeps |
vm.uma.udp_inpcb_ports.size |
Allocation size |
vm.uma.udp_inpcb_ports.stats |
|
vm.uma.udp_inpcb_ports.stats.allocs |
Total allocation calls |
vm.uma.udp_inpcb_ports.stats.current |
Current number of allocated items |
vm.uma.udp_inpcb_ports.stats.fails |
Number of allocation failures |
vm.uma.udp_inpcb_ports.stats.frees |
Total free calls |
vm.uma.udp_inpcb_ports.stats.xdomain |
Free calls from the wrong domain |
vm.uma.udpcb |
|
vm.uma.udpcb.bucket_size |
Desired per-cpu cache size |
vm.uma.udpcb.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.udpcb.domain |
|
vm.uma.udpcb.domain.[num] |
|
vm.uma.udpcb.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.udpcb.domain.[num].imax |
maximum item count in this period |
vm.uma.udpcb.domain.[num].imin |
minimum item count in this period |
vm.uma.udpcb.domain.[num].limin |
Long time minimum item count |
vm.uma.udpcb.domain.[num].nitems |
number of items in this domain |
vm.uma.udpcb.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.udpcb.domain.[num].wss |
Working set size |
vm.uma.udpcb.flags |
Allocator configuration flags |
vm.uma.udpcb.keg |
|
vm.uma.udpcb.keg.align |
item alignment mask |
vm.uma.udpcb.keg.domain |
|
vm.uma.udpcb.keg.domain.[num] |
|
vm.uma.udpcb.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.udpcb.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.udpcb.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.udpcb.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.udpcb.keg.ipers |
items available per-slab |
vm.uma.udpcb.keg.name |
Keg name |
vm.uma.udpcb.keg.ppera |
pages per-slab allocation |
vm.uma.udpcb.keg.reserve |
number of reserved items |
vm.uma.udpcb.keg.rsize |
Real object size with alignment |
vm.uma.udpcb.limit |
|
vm.uma.udpcb.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.udpcb.limit.items |
Current number of allocated items if limit is set |
vm.uma.udpcb.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.udpcb.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.udpcb.limit.sleeps |
Total zone limit sleeps |
vm.uma.udpcb.size |
Allocation size |
vm.uma.udpcb.stats |
|
vm.uma.udpcb.stats.allocs |
Total allocation calls |
vm.uma.udpcb.stats.current |
Current number of allocated items |
vm.uma.udpcb.stats.fails |
Number of allocation failures |
vm.uma.udpcb.stats.frees |
Total free calls |
vm.uma.udpcb.stats.xdomain |
Free calls from the wrong domain |
vm.uma.udplite_inpcb |
|
vm.uma.udplite_inpcb.bucket_size |
Desired per-cpu cache size |
vm.uma.udplite_inpcb.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.udplite_inpcb.domain |
|
vm.uma.udplite_inpcb.domain.[num] |
|
vm.uma.udplite_inpcb.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.udplite_inpcb.domain.[num].imax |
maximum item count in this period |
vm.uma.udplite_inpcb.domain.[num].imin |
minimum item count in this period |
vm.uma.udplite_inpcb.domain.[num].limin |
Long time minimum item count |
vm.uma.udplite_inpcb.domain.[num].nitems |
number of items in this domain |
vm.uma.udplite_inpcb.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.udplite_inpcb.domain.[num].wss |
Working set size |
vm.uma.udplite_inpcb.flags |
Allocator configuration flags |
vm.uma.udplite_inpcb.keg |
|
vm.uma.udplite_inpcb.keg.align |
item alignment mask |
vm.uma.udplite_inpcb.keg.domain |
|
vm.uma.udplite_inpcb.keg.domain.[num] |
|
vm.uma.udplite_inpcb.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.udplite_inpcb.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.udplite_inpcb.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.udplite_inpcb.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.udplite_inpcb.keg.ipers |
items available per-slab |
vm.uma.udplite_inpcb.keg.name |
Keg name |
vm.uma.udplite_inpcb.keg.ppera |
pages per-slab allocation |
vm.uma.udplite_inpcb.keg.reserve |
number of reserved items |
vm.uma.udplite_inpcb.keg.rsize |
Real object size with alignment |
vm.uma.udplite_inpcb.limit |
|
vm.uma.udplite_inpcb.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.udplite_inpcb.limit.items |
Current number of allocated items if limit is set |
vm.uma.udplite_inpcb.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.udplite_inpcb.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.udplite_inpcb.limit.sleeps |
Total zone limit sleeps |
vm.uma.udplite_inpcb.size |
Allocation size |
vm.uma.udplite_inpcb.stats |
|
vm.uma.udplite_inpcb.stats.allocs |
Total allocation calls |
vm.uma.udplite_inpcb.stats.current |
Current number of allocated items |
vm.uma.udplite_inpcb.stats.fails |
Number of allocation failures |
vm.uma.udplite_inpcb.stats.frees |
Total free calls |
vm.uma.udplite_inpcb.stats.xdomain |
Free calls from the wrong domain |
vm.uma.udplite_inpcb_ports |
|
vm.uma.udplite_inpcb_ports.bucket_size |
Desired per-cpu cache size |
vm.uma.udplite_inpcb_ports.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.udplite_inpcb_ports.domain |
|
vm.uma.udplite_inpcb_ports.domain.[num] |
|
vm.uma.udplite_inpcb_ports.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.udplite_inpcb_ports.domain.[num].imax |
maximum item count in this period |
vm.uma.udplite_inpcb_ports.domain.[num].imin |
minimum item count in this period |
vm.uma.udplite_inpcb_ports.domain.[num].limin |
Long time minimum item count |
vm.uma.udplite_inpcb_ports.domain.[num].nitems |
number of items in this domain |
vm.uma.udplite_inpcb_ports.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.udplite_inpcb_ports.domain.[num].wss |
Working set size |
vm.uma.udplite_inpcb_ports.flags |
Allocator configuration flags |
vm.uma.udplite_inpcb_ports.keg |
|
vm.uma.udplite_inpcb_ports.keg.align |
item alignment mask |
vm.uma.udplite_inpcb_ports.keg.domain |
|
vm.uma.udplite_inpcb_ports.keg.domain.[num] |
|
vm.uma.udplite_inpcb_ports.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.udplite_inpcb_ports.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.udplite_inpcb_ports.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.udplite_inpcb_ports.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.udplite_inpcb_ports.keg.ipers |
items available per-slab |
vm.uma.udplite_inpcb_ports.keg.name |
Keg name |
vm.uma.udplite_inpcb_ports.keg.ppera |
pages per-slab allocation |
vm.uma.udplite_inpcb_ports.keg.reserve |
number of reserved items |
vm.uma.udplite_inpcb_ports.keg.rsize |
Real object size with alignment |
vm.uma.udplite_inpcb_ports.limit |
|
vm.uma.udplite_inpcb_ports.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.udplite_inpcb_ports.limit.items |
Current number of allocated items if limit is set |
vm.uma.udplite_inpcb_ports.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.udplite_inpcb_ports.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.udplite_inpcb_ports.limit.sleeps |
Total zone limit sleeps |
vm.uma.udplite_inpcb_ports.size |
Allocation size |
vm.uma.udplite_inpcb_ports.stats |
|
vm.uma.udplite_inpcb_ports.stats.allocs |
Total allocation calls |
vm.uma.udplite_inpcb_ports.stats.current |
Current number of allocated items |
vm.uma.udplite_inpcb_ports.stats.fails |
Number of allocation failures |
vm.uma.udplite_inpcb_ports.stats.frees |
Total free calls |
vm.uma.udplite_inpcb_ports.stats.xdomain |
Free calls from the wrong domain |
vm.uma.umtx_pi |
|
vm.uma.umtx_pi.bucket_size |
Desired per-cpu cache size |
vm.uma.umtx_pi.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.umtx_pi.domain |
|
vm.uma.umtx_pi.domain.[num] |
|
vm.uma.umtx_pi.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.umtx_pi.domain.[num].imax |
maximum item count in this period |
vm.uma.umtx_pi.domain.[num].imin |
minimum item count in this period |
vm.uma.umtx_pi.domain.[num].limin |
Long time minimum item count |
vm.uma.umtx_pi.domain.[num].nitems |
number of items in this domain |
vm.uma.umtx_pi.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.umtx_pi.domain.[num].wss |
Working set size |
vm.uma.umtx_pi.flags |
Allocator configuration flags |
vm.uma.umtx_pi.keg |
|
vm.uma.umtx_pi.keg.align |
item alignment mask |
vm.uma.umtx_pi.keg.domain |
|
vm.uma.umtx_pi.keg.domain.[num] |
|
vm.uma.umtx_pi.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.umtx_pi.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.umtx_pi.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.umtx_pi.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.umtx_pi.keg.ipers |
items available per-slab |
vm.uma.umtx_pi.keg.name |
Keg name |
vm.uma.umtx_pi.keg.ppera |
pages per-slab allocation |
vm.uma.umtx_pi.keg.reserve |
number of reserved items |
vm.uma.umtx_pi.keg.rsize |
Real object size with alignment |
vm.uma.umtx_pi.limit |
|
vm.uma.umtx_pi.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.umtx_pi.limit.items |
Current number of allocated items if limit is set |
vm.uma.umtx_pi.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.umtx_pi.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.umtx_pi.limit.sleeps |
Total zone limit sleeps |
vm.uma.umtx_pi.size |
Allocation size |
vm.uma.umtx_pi.stats |
|
vm.uma.umtx_pi.stats.allocs |
Total allocation calls |
vm.uma.umtx_pi.stats.current |
Current number of allocated items |
vm.uma.umtx_pi.stats.fails |
Number of allocation failures |
vm.uma.umtx_pi.stats.frees |
Total free calls |
vm.uma.umtx_pi.stats.xdomain |
Free calls from the wrong domain |
vm.uma.umtx_shm |
|
vm.uma.umtx_shm.bucket_size |
Desired per-cpu cache size |
vm.uma.umtx_shm.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.umtx_shm.domain |
|
vm.uma.umtx_shm.domain.[num] |
|
vm.uma.umtx_shm.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.umtx_shm.domain.[num].imax |
maximum item count in this period |
vm.uma.umtx_shm.domain.[num].imin |
minimum item count in this period |
vm.uma.umtx_shm.domain.[num].limin |
Long time minimum item count |
vm.uma.umtx_shm.domain.[num].nitems |
number of items in this domain |
vm.uma.umtx_shm.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.umtx_shm.domain.[num].wss |
Working set size |
vm.uma.umtx_shm.flags |
Allocator configuration flags |
vm.uma.umtx_shm.keg |
|
vm.uma.umtx_shm.keg.align |
item alignment mask |
vm.uma.umtx_shm.keg.domain |
|
vm.uma.umtx_shm.keg.domain.[num] |
|
vm.uma.umtx_shm.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.umtx_shm.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.umtx_shm.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.umtx_shm.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.umtx_shm.keg.ipers |
items available per-slab |
vm.uma.umtx_shm.keg.name |
Keg name |
vm.uma.umtx_shm.keg.ppera |
pages per-slab allocation |
vm.uma.umtx_shm.keg.reserve |
number of reserved items |
vm.uma.umtx_shm.keg.rsize |
Real object size with alignment |
vm.uma.umtx_shm.limit |
|
vm.uma.umtx_shm.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.umtx_shm.limit.items |
Current number of allocated items if limit is set |
vm.uma.umtx_shm.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.umtx_shm.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.umtx_shm.limit.sleeps |
Total zone limit sleeps |
vm.uma.umtx_shm.size |
Allocation size |
vm.uma.umtx_shm.stats |
|
vm.uma.umtx_shm.stats.allocs |
Total allocation calls |
vm.uma.umtx_shm.stats.current |
Current number of allocated items |
vm.uma.umtx_shm.stats.fails |
Number of allocation failures |
vm.uma.umtx_shm.stats.frees |
Total free calls |
vm.uma.umtx_shm.stats.xdomain |
Free calls from the wrong domain |
vm.uma.unpcb |
|
vm.uma.unpcb.bucket_size |
Desired per-cpu cache size |
vm.uma.unpcb.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.unpcb.domain |
|
vm.uma.unpcb.domain.[num] |
|
vm.uma.unpcb.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.unpcb.domain.[num].imax |
maximum item count in this period |
vm.uma.unpcb.domain.[num].imin |
minimum item count in this period |
vm.uma.unpcb.domain.[num].limin |
Long time minimum item count |
vm.uma.unpcb.domain.[num].nitems |
number of items in this domain |
vm.uma.unpcb.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.unpcb.domain.[num].wss |
Working set size |
vm.uma.unpcb.flags |
Allocator configuration flags |
vm.uma.unpcb.keg |
|
vm.uma.unpcb.keg.align |
item alignment mask |
vm.uma.unpcb.keg.domain |
|
vm.uma.unpcb.keg.domain.[num] |
|
vm.uma.unpcb.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.unpcb.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.unpcb.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.unpcb.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.unpcb.keg.ipers |
items available per-slab |
vm.uma.unpcb.keg.name |
Keg name |
vm.uma.unpcb.keg.ppera |
pages per-slab allocation |
vm.uma.unpcb.keg.reserve |
number of reserved items |
vm.uma.unpcb.keg.rsize |
Real object size with alignment |
vm.uma.unpcb.limit |
|
vm.uma.unpcb.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.unpcb.limit.items |
Current number of allocated items if limit is set |
vm.uma.unpcb.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.unpcb.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.unpcb.limit.sleeps |
Total zone limit sleeps |
vm.uma.unpcb.size |
Allocation size |
vm.uma.unpcb.stats |
|
vm.uma.unpcb.stats.allocs |
Total allocation calls |
vm.uma.unpcb.stats.current |
Current number of allocated items |
vm.uma.unpcb.stats.fails |
Number of allocation failures |
vm.uma.unpcb.stats.frees |
Total free calls |
vm.uma.unpcb.stats.xdomain |
Free calls from the wrong domain |
vm.uma.vm_pgcache |
|
vm.uma.vm_pgcache.bucket_size |
Desired per-cpu cache size |
vm.uma.vm_pgcache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.vm_pgcache.domain |
|
vm.uma.vm_pgcache.domain.[num] |
|
vm.uma.vm_pgcache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.vm_pgcache.domain.[num].imax |
maximum item count in this period |
vm.uma.vm_pgcache.domain.[num].imin |
minimum item count in this period |
vm.uma.vm_pgcache.domain.[num].limin |
Long time minimum item count |
vm.uma.vm_pgcache.domain.[num].nitems |
number of items in this domain |
vm.uma.vm_pgcache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.vm_pgcache.domain.[num].wss |
Working set size |
vm.uma.vm_pgcache.flags |
Allocator configuration flags |
vm.uma.vm_pgcache.keg |
|
vm.uma.vm_pgcache.keg.name |
Keg name |
vm.uma.vm_pgcache.limit |
|
vm.uma.vm_pgcache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.vm_pgcache.limit.items |
Current number of allocated items if limit is set |
vm.uma.vm_pgcache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.vm_pgcache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.vm_pgcache.limit.sleeps |
Total zone limit sleeps |
vm.uma.vm_pgcache.size |
Allocation size |
vm.uma.vm_pgcache.stats |
|
vm.uma.vm_pgcache.stats.allocs |
Total allocation calls |
vm.uma.vm_pgcache.stats.current |
Current number of allocated items |
vm.uma.vm_pgcache.stats.fails |
Number of allocation failures |
vm.uma.vm_pgcache.stats.frees |
Total free calls |
vm.uma.vm_pgcache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.vm_pgcache_[num] |
|
vm.uma.vm_pgcache_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.vm_pgcache_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.vm_pgcache_[num].domain |
|
vm.uma.vm_pgcache_[num].domain.[num] |
|
vm.uma.vm_pgcache_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.vm_pgcache_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.vm_pgcache_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.vm_pgcache_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.vm_pgcache_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.vm_pgcache_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.vm_pgcache_[num].domain.[num].wss |
Working set size |
vm.uma.vm_pgcache_[num].flags |
Allocator configuration flags |
vm.uma.vm_pgcache_[num].keg |
|
vm.uma.vm_pgcache_[num].keg.name |
Keg name |
vm.uma.vm_pgcache_[num].limit |
|
vm.uma.vm_pgcache_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.vm_pgcache_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.vm_pgcache_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.vm_pgcache_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.vm_pgcache_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.vm_pgcache_[num].size |
Allocation size |
vm.uma.vm_pgcache_[num].stats |
|
vm.uma.vm_pgcache_[num].stats.allocs |
Total allocation calls |
vm.uma.vm_pgcache_[num].stats.current |
Current number of allocated items |
vm.uma.vm_pgcache_[num].stats.fails |
Number of allocation failures |
vm.uma.vm_pgcache_[num].stats.frees |
Total free calls |
vm.uma.vm_pgcache_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.vmem |
|
vm.uma.vmem.bucket_size |
Desired per-cpu cache size |
vm.uma.vmem.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.vmem.domain |
|
vm.uma.vmem.domain.[num] |
|
vm.uma.vmem.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.vmem.domain.[num].imax |
maximum item count in this period |
vm.uma.vmem.domain.[num].imin |
minimum item count in this period |
vm.uma.vmem.domain.[num].limin |
Long time minimum item count |
vm.uma.vmem.domain.[num].nitems |
number of items in this domain |
vm.uma.vmem.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.vmem.domain.[num].wss |
Working set size |
vm.uma.vmem.flags |
Allocator configuration flags |
vm.uma.vmem.keg |
|
vm.uma.vmem.keg.align |
item alignment mask |
vm.uma.vmem.keg.domain |
|
vm.uma.vmem.keg.domain.[num] |
|
vm.uma.vmem.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.vmem.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.vmem.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.vmem.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.vmem.keg.ipers |
items available per-slab |
vm.uma.vmem.keg.name |
Keg name |
vm.uma.vmem.keg.ppera |
pages per-slab allocation |
vm.uma.vmem.keg.reserve |
number of reserved items |
vm.uma.vmem.keg.rsize |
Real object size with alignment |
vm.uma.vmem.limit |
|
vm.uma.vmem.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.vmem.limit.items |
Current number of allocated items if limit is set |
vm.uma.vmem.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.vmem.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.vmem.limit.sleeps |
Total zone limit sleeps |
vm.uma.vmem.size |
Allocation size |
vm.uma.vmem.stats |
|
vm.uma.vmem.stats.allocs |
Total allocation calls |
vm.uma.vmem.stats.current |
Current number of allocated items |
vm.uma.vmem.stats.fails |
Number of allocation failures |
vm.uma.vmem.stats.frees |
Total free calls |
vm.uma.vmem.stats.xdomain |
Free calls from the wrong domain |
vm.uma.vmem_btag |
|
vm.uma.vmem_btag.bucket_size |
Desired per-cpu cache size |
vm.uma.vmem_btag.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.vmem_btag.domain |
|
vm.uma.vmem_btag.domain.[num] |
|
vm.uma.vmem_btag.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.vmem_btag.domain.[num].imax |
maximum item count in this period |
vm.uma.vmem_btag.domain.[num].imin |
minimum item count in this period |
vm.uma.vmem_btag.domain.[num].limin |
Long time minimum item count |
vm.uma.vmem_btag.domain.[num].nitems |
number of items in this domain |
vm.uma.vmem_btag.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.vmem_btag.domain.[num].wss |
Working set size |
vm.uma.vmem_btag.flags |
Allocator configuration flags |
vm.uma.vmem_btag.keg |
|
vm.uma.vmem_btag.keg.align |
item alignment mask |
vm.uma.vmem_btag.keg.domain |
|
vm.uma.vmem_btag.keg.domain.[num] |
|
vm.uma.vmem_btag.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.vmem_btag.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.vmem_btag.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.vmem_btag.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.vmem_btag.keg.ipers |
items available per-slab |
vm.uma.vmem_btag.keg.name |
Keg name |
vm.uma.vmem_btag.keg.ppera |
pages per-slab allocation |
vm.uma.vmem_btag.keg.reserve |
number of reserved items |
vm.uma.vmem_btag.keg.rsize |
Real object size with alignment |
vm.uma.vmem_btag.limit |
|
vm.uma.vmem_btag.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.vmem_btag.limit.items |
Current number of allocated items if limit is set |
vm.uma.vmem_btag.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.vmem_btag.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.vmem_btag.limit.sleeps |
Total zone limit sleeps |
vm.uma.vmem_btag.size |
Allocation size |
vm.uma.vmem_btag.stats |
|
vm.uma.vmem_btag.stats.allocs |
Total allocation calls |
vm.uma.vmem_btag.stats.current |
Current number of allocated items |
vm.uma.vmem_btag.stats.fails |
Number of allocation failures |
vm.uma.vmem_btag.stats.frees |
Total free calls |
vm.uma.vmem_btag.stats.xdomain |
Free calls from the wrong domain |
vm.uma.vnpbuf |
|
vm.uma.vnpbuf.bucket_size |
Desired per-cpu cache size |
vm.uma.vnpbuf.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.vnpbuf.domain |
|
vm.uma.vnpbuf.domain.[num] |
|
vm.uma.vnpbuf.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.vnpbuf.domain.[num].imax |
maximum item count in this period |
vm.uma.vnpbuf.domain.[num].imin |
minimum item count in this period |
vm.uma.vnpbuf.domain.[num].limin |
Long time minimum item count |
vm.uma.vnpbuf.domain.[num].nitems |
number of items in this domain |
vm.uma.vnpbuf.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.vnpbuf.domain.[num].wss |
Working set size |
vm.uma.vnpbuf.flags |
Allocator configuration flags |
vm.uma.vnpbuf.keg |
|
vm.uma.vnpbuf.keg.align |
item alignment mask |
vm.uma.vnpbuf.keg.domain |
|
vm.uma.vnpbuf.keg.domain.[num] |
|
vm.uma.vnpbuf.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.vnpbuf.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.vnpbuf.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.vnpbuf.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.vnpbuf.keg.ipers |
items available per-slab |
vm.uma.vnpbuf.keg.name |
Keg name |
vm.uma.vnpbuf.keg.ppera |
pages per-slab allocation |
vm.uma.vnpbuf.keg.reserve |
number of reserved items |
vm.uma.vnpbuf.keg.rsize |
Real object size with alignment |
vm.uma.vnpbuf.limit |
|
vm.uma.vnpbuf.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.vnpbuf.limit.items |
Current number of allocated items if limit is set |
vm.uma.vnpbuf.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.vnpbuf.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.vnpbuf.limit.sleeps |
Total zone limit sleeps |
vm.uma.vnpbuf.size |
Allocation size |
vm.uma.vnpbuf.stats |
|
vm.uma.vnpbuf.stats.allocs |
Total allocation calls |
vm.uma.vnpbuf.stats.current |
Current number of allocated items |
vm.uma.vnpbuf.stats.fails |
Number of allocation failures |
vm.uma.vnpbuf.stats.frees |
Total free calls |
vm.uma.vnpbuf.stats.xdomain |
Free calls from the wrong domain |
vm.uma.vtnet_tx_hdr |
|
vm.uma.vtnet_tx_hdr.bucket_size |
Desired per-cpu cache size |
vm.uma.vtnet_tx_hdr.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.vtnet_tx_hdr.domain |
|
vm.uma.vtnet_tx_hdr.domain.[num] |
|
vm.uma.vtnet_tx_hdr.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.vtnet_tx_hdr.domain.[num].imax |
maximum item count in this period |
vm.uma.vtnet_tx_hdr.domain.[num].imin |
minimum item count in this period |
vm.uma.vtnet_tx_hdr.domain.[num].limin |
Long time minimum item count |
vm.uma.vtnet_tx_hdr.domain.[num].nitems |
number of items in this domain |
vm.uma.vtnet_tx_hdr.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.vtnet_tx_hdr.domain.[num].wss |
Working set size |
vm.uma.vtnet_tx_hdr.flags |
Allocator configuration flags |
vm.uma.vtnet_tx_hdr.keg |
|
vm.uma.vtnet_tx_hdr.keg.align |
item alignment mask |
vm.uma.vtnet_tx_hdr.keg.domain |
|
vm.uma.vtnet_tx_hdr.keg.domain.[num] |
|
vm.uma.vtnet_tx_hdr.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.vtnet_tx_hdr.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.vtnet_tx_hdr.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.vtnet_tx_hdr.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.vtnet_tx_hdr.keg.ipers |
items available per-slab |
vm.uma.vtnet_tx_hdr.keg.name |
Keg name |
vm.uma.vtnet_tx_hdr.keg.ppera |
pages per-slab allocation |
vm.uma.vtnet_tx_hdr.keg.reserve |
number of reserved items |
vm.uma.vtnet_tx_hdr.keg.rsize |
Real object size with alignment |
vm.uma.vtnet_tx_hdr.limit |
|
vm.uma.vtnet_tx_hdr.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.vtnet_tx_hdr.limit.items |
Current number of allocated items if limit is set |
vm.uma.vtnet_tx_hdr.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.vtnet_tx_hdr.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.vtnet_tx_hdr.limit.sleeps |
Total zone limit sleeps |
vm.uma.vtnet_tx_hdr.size |
Allocation size |
vm.uma.vtnet_tx_hdr.stats |
|
vm.uma.vtnet_tx_hdr.stats.allocs |
Total allocation calls |
vm.uma.vtnet_tx_hdr.stats.current |
Current number of allocated items |
vm.uma.vtnet_tx_hdr.stats.fails |
Number of allocation failures |
vm.uma.vtnet_tx_hdr.stats.frees |
Total free calls |
vm.uma.vtnet_tx_hdr.stats.xdomain |
Free calls from the wrong domain |
vm.uma.zap_attr_cache |
|
vm.uma.zap_attr_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.zap_attr_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.zap_attr_cache.domain |
|
vm.uma.zap_attr_cache.domain.[num] |
|
vm.uma.zap_attr_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.zap_attr_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.zap_attr_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.zap_attr_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.zap_attr_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.zap_attr_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.zap_attr_cache.domain.[num].wss |
Working set size |
vm.uma.zap_attr_cache.flags |
Allocator configuration flags |
vm.uma.zap_attr_cache.keg |
|
vm.uma.zap_attr_cache.keg.align |
item alignment mask |
vm.uma.zap_attr_cache.keg.domain |
|
vm.uma.zap_attr_cache.keg.domain.[num] |
|
vm.uma.zap_attr_cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.zap_attr_cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.zap_attr_cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.zap_attr_cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.zap_attr_cache.keg.ipers |
items available per-slab |
vm.uma.zap_attr_cache.keg.name |
Keg name |
vm.uma.zap_attr_cache.keg.ppera |
pages per-slab allocation |
vm.uma.zap_attr_cache.keg.reserve |
number of reserved items |
vm.uma.zap_attr_cache.keg.rsize |
Real object size with alignment |
vm.uma.zap_attr_cache.limit |
|
vm.uma.zap_attr_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.zap_attr_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.zap_attr_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.zap_attr_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.zap_attr_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.zap_attr_cache.size |
Allocation size |
vm.uma.zap_attr_cache.stats |
|
vm.uma.zap_attr_cache.stats.allocs |
Total allocation calls |
vm.uma.zap_attr_cache.stats.current |
Current number of allocated items |
vm.uma.zap_attr_cache.stats.fails |
Number of allocation failures |
vm.uma.zap_attr_cache.stats.frees |
Total free calls |
vm.uma.zap_attr_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.zap_attr_long_cache |
|
vm.uma.zap_attr_long_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.zap_attr_long_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.zap_attr_long_cache.domain |
|
vm.uma.zap_attr_long_cache.domain.[num] |
|
vm.uma.zap_attr_long_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.zap_attr_long_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.zap_attr_long_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.zap_attr_long_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.zap_attr_long_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.zap_attr_long_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.zap_attr_long_cache.domain.[num].wss |
Working set size |
vm.uma.zap_attr_long_cache.flags |
Allocator configuration flags |
vm.uma.zap_attr_long_cache.keg |
|
vm.uma.zap_attr_long_cache.keg.align |
item alignment mask |
vm.uma.zap_attr_long_cache.keg.domain |
|
vm.uma.zap_attr_long_cache.keg.domain.[num] |
|
vm.uma.zap_attr_long_cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.zap_attr_long_cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.zap_attr_long_cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.zap_attr_long_cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.zap_attr_long_cache.keg.ipers |
items available per-slab |
vm.uma.zap_attr_long_cache.keg.name |
Keg name |
vm.uma.zap_attr_long_cache.keg.ppera |
pages per-slab allocation |
vm.uma.zap_attr_long_cache.keg.reserve |
number of reserved items |
vm.uma.zap_attr_long_cache.keg.rsize |
Real object size with alignment |
vm.uma.zap_attr_long_cache.limit |
|
vm.uma.zap_attr_long_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.zap_attr_long_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.zap_attr_long_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.zap_attr_long_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.zap_attr_long_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.zap_attr_long_cache.size |
Allocation size |
vm.uma.zap_attr_long_cache.stats |
|
vm.uma.zap_attr_long_cache.stats.allocs |
Total allocation calls |
vm.uma.zap_attr_long_cache.stats.current |
Current number of allocated items |
vm.uma.zap_attr_long_cache.stats.fails |
Number of allocation failures |
vm.uma.zap_attr_long_cache.stats.frees |
Total free calls |
vm.uma.zap_attr_long_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.zap_name |
|
vm.uma.zap_name.bucket_size |
Desired per-cpu cache size |
vm.uma.zap_name.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.zap_name.domain |
|
vm.uma.zap_name.domain.[num] |
|
vm.uma.zap_name.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.zap_name.domain.[num].imax |
maximum item count in this period |
vm.uma.zap_name.domain.[num].imin |
minimum item count in this period |
vm.uma.zap_name.domain.[num].limin |
Long time minimum item count |
vm.uma.zap_name.domain.[num].nitems |
number of items in this domain |
vm.uma.zap_name.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.zap_name.domain.[num].wss |
Working set size |
vm.uma.zap_name.flags |
Allocator configuration flags |
vm.uma.zap_name.keg |
|
vm.uma.zap_name.keg.align |
item alignment mask |
vm.uma.zap_name.keg.domain |
|
vm.uma.zap_name.keg.domain.[num] |
|
vm.uma.zap_name.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.zap_name.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.zap_name.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.zap_name.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.zap_name.keg.ipers |
items available per-slab |
vm.uma.zap_name.keg.name |
Keg name |
vm.uma.zap_name.keg.ppera |
pages per-slab allocation |
vm.uma.zap_name.keg.reserve |
number of reserved items |
vm.uma.zap_name.keg.rsize |
Real object size with alignment |
vm.uma.zap_name.limit |
|
vm.uma.zap_name.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.zap_name.limit.items |
Current number of allocated items if limit is set |
vm.uma.zap_name.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.zap_name.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.zap_name.limit.sleeps |
Total zone limit sleeps |
vm.uma.zap_name.size |
Allocation size |
vm.uma.zap_name.stats |
|
vm.uma.zap_name.stats.allocs |
Total allocation calls |
vm.uma.zap_name.stats.current |
Current number of allocated items |
vm.uma.zap_name.stats.fails |
Number of allocation failures |
vm.uma.zap_name.stats.frees |
Total free calls |
vm.uma.zap_name.stats.xdomain |
Free calls from the wrong domain |
vm.uma.zap_name_long |
|
vm.uma.zap_name_long.bucket_size |
Desired per-cpu cache size |
vm.uma.zap_name_long.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.zap_name_long.domain |
|
vm.uma.zap_name_long.domain.[num] |
|
vm.uma.zap_name_long.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.zap_name_long.domain.[num].imax |
maximum item count in this period |
vm.uma.zap_name_long.domain.[num].imin |
minimum item count in this period |
vm.uma.zap_name_long.domain.[num].limin |
Long time minimum item count |
vm.uma.zap_name_long.domain.[num].nitems |
number of items in this domain |
vm.uma.zap_name_long.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.zap_name_long.domain.[num].wss |
Working set size |
vm.uma.zap_name_long.flags |
Allocator configuration flags |
vm.uma.zap_name_long.keg |
|
vm.uma.zap_name_long.keg.align |
item alignment mask |
vm.uma.zap_name_long.keg.domain |
|
vm.uma.zap_name_long.keg.domain.[num] |
|
vm.uma.zap_name_long.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.zap_name_long.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.zap_name_long.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.zap_name_long.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.zap_name_long.keg.ipers |
items available per-slab |
vm.uma.zap_name_long.keg.name |
Keg name |
vm.uma.zap_name_long.keg.ppera |
pages per-slab allocation |
vm.uma.zap_name_long.keg.reserve |
number of reserved items |
vm.uma.zap_name_long.keg.rsize |
Real object size with alignment |
vm.uma.zap_name_long.limit |
|
vm.uma.zap_name_long.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.zap_name_long.limit.items |
Current number of allocated items if limit is set |
vm.uma.zap_name_long.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.zap_name_long.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.zap_name_long.limit.sleeps |
Total zone limit sleeps |
vm.uma.zap_name_long.size |
Allocation size |
vm.uma.zap_name_long.stats |
|
vm.uma.zap_name_long.stats.allocs |
Total allocation calls |
vm.uma.zap_name_long.stats.current |
Current number of allocated items |
vm.uma.zap_name_long.stats.fails |
Number of allocation failures |
vm.uma.zap_name_long.stats.frees |
Total free calls |
vm.uma.zap_name_long.stats.xdomain |
Free calls from the wrong domain |
vm.uma.zfs_btree_leaf_cache |
|
vm.uma.zfs_btree_leaf_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.zfs_btree_leaf_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.zfs_btree_leaf_cache.domain |
|
vm.uma.zfs_btree_leaf_cache.domain.[num] |
|
vm.uma.zfs_btree_leaf_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.zfs_btree_leaf_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.zfs_btree_leaf_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.zfs_btree_leaf_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.zfs_btree_leaf_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.zfs_btree_leaf_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.zfs_btree_leaf_cache.domain.[num].wss |
Working set size |
vm.uma.zfs_btree_leaf_cache.flags |
Allocator configuration flags |
vm.uma.zfs_btree_leaf_cache.keg |
|
vm.uma.zfs_btree_leaf_cache.keg.align |
item alignment mask |
vm.uma.zfs_btree_leaf_cache.keg.domain |
|
vm.uma.zfs_btree_leaf_cache.keg.domain.[num] |
|
vm.uma.zfs_btree_leaf_cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.zfs_btree_leaf_cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.zfs_btree_leaf_cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.zfs_btree_leaf_cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.zfs_btree_leaf_cache.keg.ipers |
items available per-slab |
vm.uma.zfs_btree_leaf_cache.keg.name |
Keg name |
vm.uma.zfs_btree_leaf_cache.keg.ppera |
pages per-slab allocation |
vm.uma.zfs_btree_leaf_cache.keg.reserve |
number of reserved items |
vm.uma.zfs_btree_leaf_cache.keg.rsize |
Real object size with alignment |
vm.uma.zfs_btree_leaf_cache.limit |
|
vm.uma.zfs_btree_leaf_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.zfs_btree_leaf_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.zfs_btree_leaf_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.zfs_btree_leaf_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.zfs_btree_leaf_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.zfs_btree_leaf_cache.size |
Allocation size |
vm.uma.zfs_btree_leaf_cache.stats |
|
vm.uma.zfs_btree_leaf_cache.stats.allocs |
Total allocation calls |
vm.uma.zfs_btree_leaf_cache.stats.current |
Current number of allocated items |
vm.uma.zfs_btree_leaf_cache.stats.fails |
Number of allocation failures |
vm.uma.zfs_btree_leaf_cache.stats.frees |
Total free calls |
vm.uma.zfs_btree_leaf_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.zfs_znode_cache |
|
vm.uma.zfs_znode_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.zfs_znode_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.zfs_znode_cache.domain |
|
vm.uma.zfs_znode_cache.domain.[num] |
|
vm.uma.zfs_znode_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.zfs_znode_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.zfs_znode_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.zfs_znode_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.zfs_znode_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.zfs_znode_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.zfs_znode_cache.domain.[num].wss |
Working set size |
vm.uma.zfs_znode_cache.flags |
Allocator configuration flags |
vm.uma.zfs_znode_cache.keg |
|
vm.uma.zfs_znode_cache.keg.align |
item alignment mask |
vm.uma.zfs_znode_cache.keg.domain |
|
vm.uma.zfs_znode_cache.keg.domain.[num] |
|
vm.uma.zfs_znode_cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.zfs_znode_cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.zfs_znode_cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.zfs_znode_cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.zfs_znode_cache.keg.ipers |
items available per-slab |
vm.uma.zfs_znode_cache.keg.name |
Keg name |
vm.uma.zfs_znode_cache.keg.ppera |
pages per-slab allocation |
vm.uma.zfs_znode_cache.keg.reserve |
number of reserved items |
vm.uma.zfs_znode_cache.keg.rsize |
Real object size with alignment |
vm.uma.zfs_znode_cache.limit |
|
vm.uma.zfs_znode_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.zfs_znode_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.zfs_znode_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.zfs_znode_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.zfs_znode_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.zfs_znode_cache.size |
Allocation size |
vm.uma.zfs_znode_cache.stats |
|
vm.uma.zfs_znode_cache.stats.allocs |
Total allocation calls |
vm.uma.zfs_znode_cache.stats.current |
Current number of allocated items |
vm.uma.zfs_znode_cache.stats.fails |
Number of allocation failures |
vm.uma.zfs_znode_cache.stats.frees |
Total free calls |
vm.uma.zfs_znode_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.zil_lwb_cache |
|
vm.uma.zil_lwb_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.zil_lwb_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.zil_lwb_cache.domain |
|
vm.uma.zil_lwb_cache.domain.[num] |
|
vm.uma.zil_lwb_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.zil_lwb_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.zil_lwb_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.zil_lwb_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.zil_lwb_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.zil_lwb_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.zil_lwb_cache.domain.[num].wss |
Working set size |
vm.uma.zil_lwb_cache.flags |
Allocator configuration flags |
vm.uma.zil_lwb_cache.keg |
|
vm.uma.zil_lwb_cache.keg.align |
item alignment mask |
vm.uma.zil_lwb_cache.keg.domain |
|
vm.uma.zil_lwb_cache.keg.domain.[num] |
|
vm.uma.zil_lwb_cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.zil_lwb_cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.zil_lwb_cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.zil_lwb_cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.zil_lwb_cache.keg.ipers |
items available per-slab |
vm.uma.zil_lwb_cache.keg.name |
Keg name |
vm.uma.zil_lwb_cache.keg.ppera |
pages per-slab allocation |
vm.uma.zil_lwb_cache.keg.reserve |
number of reserved items |
vm.uma.zil_lwb_cache.keg.rsize |
Real object size with alignment |
vm.uma.zil_lwb_cache.limit |
|
vm.uma.zil_lwb_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.zil_lwb_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.zil_lwb_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.zil_lwb_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.zil_lwb_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.zil_lwb_cache.size |
Allocation size |
vm.uma.zil_lwb_cache.stats |
|
vm.uma.zil_lwb_cache.stats.allocs |
Total allocation calls |
vm.uma.zil_lwb_cache.stats.current |
Current number of allocated items |
vm.uma.zil_lwb_cache.stats.fails |
Number of allocation failures |
vm.uma.zil_lwb_cache.stats.frees |
Total free calls |
vm.uma.zil_lwb_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.zil_zcw_cache |
|
vm.uma.zil_zcw_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.zil_zcw_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.zil_zcw_cache.domain |
|
vm.uma.zil_zcw_cache.domain.[num] |
|
vm.uma.zil_zcw_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.zil_zcw_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.zil_zcw_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.zil_zcw_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.zil_zcw_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.zil_zcw_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.zil_zcw_cache.domain.[num].wss |
Working set size |
vm.uma.zil_zcw_cache.flags |
Allocator configuration flags |
vm.uma.zil_zcw_cache.keg |
|
vm.uma.zil_zcw_cache.keg.align |
item alignment mask |
vm.uma.zil_zcw_cache.keg.domain |
|
vm.uma.zil_zcw_cache.keg.domain.[num] |
|
vm.uma.zil_zcw_cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.zil_zcw_cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.zil_zcw_cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.zil_zcw_cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.zil_zcw_cache.keg.ipers |
items available per-slab |
vm.uma.zil_zcw_cache.keg.name |
Keg name |
vm.uma.zil_zcw_cache.keg.ppera |
pages per-slab allocation |
vm.uma.zil_zcw_cache.keg.reserve |
number of reserved items |
vm.uma.zil_zcw_cache.keg.rsize |
Real object size with alignment |
vm.uma.zil_zcw_cache.limit |
|
vm.uma.zil_zcw_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.zil_zcw_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.zil_zcw_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.zil_zcw_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.zil_zcw_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.zil_zcw_cache.size |
Allocation size |
vm.uma.zil_zcw_cache.stats |
|
vm.uma.zil_zcw_cache.stats.allocs |
Total allocation calls |
vm.uma.zil_zcw_cache.stats.current |
Current number of allocated items |
vm.uma.zil_zcw_cache.stats.fails |
Number of allocation failures |
vm.uma.zil_zcw_cache.stats.frees |
Total free calls |
vm.uma.zil_zcw_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.zio_buf_comb_[num] |
|
vm.uma.zio_buf_comb_[num].bucket_size |
Desired per-cpu cache size |
vm.uma.zio_buf_comb_[num].bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.zio_buf_comb_[num].domain |
|
vm.uma.zio_buf_comb_[num].domain.[num] |
|
vm.uma.zio_buf_comb_[num].domain.[num].bimin |
Minimum item count in this batch |
vm.uma.zio_buf_comb_[num].domain.[num].imax |
maximum item count in this period |
vm.uma.zio_buf_comb_[num].domain.[num].imin |
minimum item count in this period |
vm.uma.zio_buf_comb_[num].domain.[num].limin |
Long time minimum item count |
vm.uma.zio_buf_comb_[num].domain.[num].nitems |
number of items in this domain |
vm.uma.zio_buf_comb_[num].domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.zio_buf_comb_[num].domain.[num].wss |
Working set size |
vm.uma.zio_buf_comb_[num].flags |
Allocator configuration flags |
vm.uma.zio_buf_comb_[num].keg |
|
vm.uma.zio_buf_comb_[num].keg.align |
item alignment mask |
vm.uma.zio_buf_comb_[num].keg.domain |
|
vm.uma.zio_buf_comb_[num].keg.domain.[num] |
|
vm.uma.zio_buf_comb_[num].keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.zio_buf_comb_[num].keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.zio_buf_comb_[num].keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.zio_buf_comb_[num].keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.zio_buf_comb_[num].keg.ipers |
items available per-slab |
vm.uma.zio_buf_comb_[num].keg.name |
Keg name |
vm.uma.zio_buf_comb_[num].keg.ppera |
pages per-slab allocation |
vm.uma.zio_buf_comb_[num].keg.reserve |
number of reserved items |
vm.uma.zio_buf_comb_[num].keg.rsize |
Real object size with alignment |
vm.uma.zio_buf_comb_[num].limit |
|
vm.uma.zio_buf_comb_[num].limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.zio_buf_comb_[num].limit.items |
Current number of allocated items if limit is set |
vm.uma.zio_buf_comb_[num].limit.max_items |
Maximum number of allocated and cached items |
vm.uma.zio_buf_comb_[num].limit.sleepers |
Number of threads sleeping at limit |
vm.uma.zio_buf_comb_[num].limit.sleeps |
Total zone limit sleeps |
vm.uma.zio_buf_comb_[num].size |
Allocation size |
vm.uma.zio_buf_comb_[num].stats |
|
vm.uma.zio_buf_comb_[num].stats.allocs |
Total allocation calls |
vm.uma.zio_buf_comb_[num].stats.current |
Current number of allocated items |
vm.uma.zio_buf_comb_[num].stats.fails |
Number of allocation failures |
vm.uma.zio_buf_comb_[num].stats.frees |
Total free calls |
vm.uma.zio_buf_comb_[num].stats.xdomain |
Free calls from the wrong domain |
vm.uma.zio_cache |
|
vm.uma.zio_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.zio_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.zio_cache.domain |
|
vm.uma.zio_cache.domain.[num] |
|
vm.uma.zio_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.zio_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.zio_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.zio_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.zio_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.zio_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.zio_cache.domain.[num].wss |
Working set size |
vm.uma.zio_cache.flags |
Allocator configuration flags |
vm.uma.zio_cache.keg |
|
vm.uma.zio_cache.keg.align |
item alignment mask |
vm.uma.zio_cache.keg.domain |
|
vm.uma.zio_cache.keg.domain.[num] |
|
vm.uma.zio_cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.zio_cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.zio_cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.zio_cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.zio_cache.keg.ipers |
items available per-slab |
vm.uma.zio_cache.keg.name |
Keg name |
vm.uma.zio_cache.keg.ppera |
pages per-slab allocation |
vm.uma.zio_cache.keg.reserve |
number of reserved items |
vm.uma.zio_cache.keg.rsize |
Real object size with alignment |
vm.uma.zio_cache.limit |
|
vm.uma.zio_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.zio_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.zio_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.zio_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.zio_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.zio_cache.size |
Allocation size |
vm.uma.zio_cache.stats |
|
vm.uma.zio_cache.stats.allocs |
Total allocation calls |
vm.uma.zio_cache.stats.current |
Current number of allocated items |
vm.uma.zio_cache.stats.fails |
Number of allocation failures |
vm.uma.zio_cache.stats.frees |
Total free calls |
vm.uma.zio_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma.zio_link_cache |
|
vm.uma.zio_link_cache.bucket_size |
Desired per-cpu cache size |
vm.uma.zio_link_cache.bucket_size_max |
Maximum allowed per-cpu cache size |
vm.uma.zio_link_cache.domain |
|
vm.uma.zio_link_cache.domain.[num] |
|
vm.uma.zio_link_cache.domain.[num].bimin |
Minimum item count in this batch |
vm.uma.zio_link_cache.domain.[num].imax |
maximum item count in this period |
vm.uma.zio_link_cache.domain.[num].imin |
minimum item count in this period |
vm.uma.zio_link_cache.domain.[num].limin |
Long time minimum item count |
vm.uma.zio_link_cache.domain.[num].nitems |
number of items in this domain |
vm.uma.zio_link_cache.domain.[num].timin |
Time since zero long time minimum item count |
vm.uma.zio_link_cache.domain.[num].wss |
Working set size |
vm.uma.zio_link_cache.flags |
Allocator configuration flags |
vm.uma.zio_link_cache.keg |
|
vm.uma.zio_link_cache.keg.align |
item alignment mask |
vm.uma.zio_link_cache.keg.domain |
|
vm.uma.zio_link_cache.keg.domain.[num] |
|
vm.uma.zio_link_cache.keg.domain.[num].free_items |
Items free in the slab layer |
vm.uma.zio_link_cache.keg.domain.[num].free_slabs |
Unused slabs |
vm.uma.zio_link_cache.keg.domain.[num].pages |
Total pages currently allocated from VM |
vm.uma.zio_link_cache.keg.efficiency |
Slab utilization (100 - internal fragmentation %) |
vm.uma.zio_link_cache.keg.ipers |
items available per-slab |
vm.uma.zio_link_cache.keg.name |
Keg name |
vm.uma.zio_link_cache.keg.ppera |
pages per-slab allocation |
vm.uma.zio_link_cache.keg.reserve |
number of reserved items |
vm.uma.zio_link_cache.keg.rsize |
Real object size with alignment |
vm.uma.zio_link_cache.limit |
|
vm.uma.zio_link_cache.limit.bucket_max |
Maximum number of items in each domain's bucket cache |
vm.uma.zio_link_cache.limit.items |
Current number of allocated items if limit is set |
vm.uma.zio_link_cache.limit.max_items |
Maximum number of allocated and cached items |
vm.uma.zio_link_cache.limit.sleepers |
Number of threads sleeping at limit |
vm.uma.zio_link_cache.limit.sleeps |
Total zone limit sleeps |
vm.uma.zio_link_cache.size |
Allocation size |
vm.uma.zio_link_cache.stats |
|
vm.uma.zio_link_cache.stats.allocs |
Total allocation calls |
vm.uma.zio_link_cache.stats.current |
Current number of allocated items |
vm.uma.zio_link_cache.stats.fails |
Number of allocation failures |
vm.uma.zio_link_cache.stats.frees |
Total free calls |
vm.uma.zio_link_cache.stats.xdomain |
Free calls from the wrong domain |
vm.uma_kmem_limit |
UMA kernel memory soft limit |
vm.uma_kmem_total |
UMA kernel memory usage |
vm.v_free_min |
Minimum low-free-pages threshold |
vm.v_free_reserved |
Pages reserved for deadlock |
vm.v_free_severe |
Severe page depletion point |
vm.v_free_target |
Desired free pages |
vm.v_inactive_target |
Pages desired inactive |
vm.v_pageout_free_min |
Min pages reserved for kernel |
vm.vmdaemon_timeout |
Time between vmdaemon runs |
vm.vmtotal |
System virtual memory statistics |
vm.vnode_pbufs |
number of physical buffers allocated for vnode pager |
vm.zone_count |
Number of UMA zones |
vm.zone_stats |
Zone Stats |
vm.zone_warnings |
Warn when UMA zones become full |
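Any of the OIDs listed above can be read from the command line with sysctl(8), for example "sysctl vm.uma_kmem_total", or programmatically with sysctl(3). The following C sketch shows the programmatic path for one numeric OID from this list; the choice of vm.uma_kmem_total and the assumption that its value fits in an unsigned long are illustrative only (the size check guards against a type mismatch), so treat this as a minimal example rather than a canonical implementation.

/*
 * Minimal sketch: read one numeric OID from the listing above via
 * sysctlbyname(3). vm.uma_kmem_total is used as an example; its exact
 * C type is an assumption here, so the size reported by the kernel is
 * checked before the value is interpreted.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	unsigned long value = 0;
	size_t len = 0;

	/* First ask the kernel how large the value is. */
	if (sysctlbyname("vm.uma_kmem_total", NULL, &len, NULL, 0) == -1) {
		perror("sysctlbyname(size)");
		return (EXIT_FAILURE);
	}
	if (len > sizeof(value)) {
		fprintf(stderr, "unexpected value size: %zu bytes\n", len);
		return (EXIT_FAILURE);
	}
	/* Then read the value itself into a suitably sized buffer. */
	if (sysctlbyname("vm.uma_kmem_total", &value, &len, NULL, 0) == -1) {
		perror("sysctlbyname(read)");
		return (EXIT_FAILURE);
	}
	printf("vm.uma_kmem_total: %lu bytes\n", value);
	return (EXIT_SUCCESS);
}

The same pattern applies to the other read-only numeric OIDs in this table; string-valued OIDs (for example the keg name entries) are read the same way but into a character buffer of the kernel-reported length.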