Friday, 9 September 2016

What is Scavenging?

Scavenging is a DNS server feature that automates the cleanup and removal of stale resource records from DNS zones.

DNS port number

DNS uses port number 53 (UDP for typical queries, TCP for zone transfers and large responses).
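To see port 53 in action, here is a minimal sketch (Python standard library only) that hand-builds a DNS query for an A record and sends it over UDP to port 53. The resolver 8.8.8.8 and the name example.com are just illustrative choices:

import socket
import struct

def build_query(name, qtype=1):
    # DNS header: ID, flags (recursion desired), QDCOUNT=1, other counts 0
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    # QTYPE (1 = A record), QCLASS (1 = IN)
    return header + qname + struct.pack(">HH", qtype, 1)

# DNS listens on port 53 (UDP for typical queries, TCP for zone transfers)
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.settimeout(3)
    s.sendto(build_query("example.com"), ("8.8.8.8", 53))
    response, _ = s.recvfrom(512)

# ANCOUNT (answer record count) lives in bytes 6-7 of the response header
print("answers:", struct.unpack(">H", response[6:8])[0])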

Thursday, 25 August 2016

List of DNS record types

A - Address record:
Returns a 32-bit IPv4 address; most commonly used to map a hostname to an IP address of the host.

MX - Mail exchange record: Maps a domain name to a list of message transfer agents for that domain.

PTR - Pointer record:
Pointer to a canonical name. Unlike a CNAME, DNS processing stops and just the name is returned. The most common use is for implementing reverse DNS lookups, but other uses include such things as DNS-SD.

SOA - Start of [a zone of] authority record:
Specifies authoritative information about a DNS zone, including the primary name server, the email of the domain administrator, the domain serial number, and several timers relating to refreshing the zone.

SRV - Service locator record:
Generalized service location record, used for newer protocols instead of creating protocol-specific records such as MX.
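These record types are easy to explore from code. A minimal sketch, assuming the third-party dnspython package (pip install dnspython) is installed; the names queried are placeholders:

import dns.resolver      # pip install dnspython
import dns.reversename

# A record: hostname -> 32-bit IPv4 address
for rr in dns.resolver.resolve("example.com", "A"):
    print("A:", rr.address)

# MX record: domain -> mail exchangers, each with a preference value
for rr in dns.resolver.resolve("example.com", "MX"):
    print("MX:", rr.preference, rr.exchange)

# SOA record: zone metadata (primary name server, serial number, timers)
soa = dns.resolver.resolve("example.com", "SOA")[0]
print("SOA:", soa.mname, soa.serial)

# PTR record: reverse lookup, IP address -> name
rev = dns.reversename.from_address("8.8.8.8")
print("PTR:", dns.resolver.resolve(rev, "PTR")[0].target)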

Saturday, 9 July 2016

Page File and Memory dump in 64-bit versions of Windows

How to determine the appropriate page file size for 64-bit versions of Windows



Summary
A page file (also known as a "paging file") is an optional, hidden system file on a hard disk. The page file can be used to "back" (or support) system crash dumps and extend how much system-committed memory (also known as “virtual memory”) a system can back. It also enables the system to remove infrequently accessed modified pages from physical memory to let the system use physical memory more efficiently for more frequently accessed pages.

64-bit versions of Windows and Windows Server support more physical memory (RAM) than 32-bit versions support. However, the reason to configure the page file size has not changed. It has always been about supporting a system crash dump, if it is necessary, or extending the system commit limit, if it is necessary. For example, when a lot of physical memory is installed, a page file might not be required to back the system commit charge during peak usage. The available physical memory alone might be large enough to do this. However, a page file or a dedicated dump file might still be required to back a system crash dump.

Use the following considerations for page file sizing for all versions of Windows and Windows Server:
  • Crash dump setting: If you want a crash dump file to be created during a system crash, a page file or a dedicated dump file must exist and be large enough to back the system crash dump setting. Otherwise, a system memory dump file is not created.
  • Peak system commit charge: The system commit charge cannot exceed the system commit limit. This limit is the sum of physical memory (RAM) and all page files combined. If no page files exist, the system commit limit is slightly less than the physical memory installed. Peak system-committed memory usage can vary greatly between systems. Therefore, physical memory and page file sizing also varies.
  • Quantity of infrequently accessed pages: The purpose of a page file is to back infrequently accessed modified pages so that they can be removed from physical memory. This provides more available space for more frequently accessed pages. The "\Memory\Modified Page List Bytes" performance counter measures, in part, the number of infrequently accessed modified pages that are destined for the hard disk. However, be aware that not all the memory on the modified page list is written out to disk. Typically, several hundred megabytes of memory remains resident on the modified list. Therefore, consider extending or adding a page file if all the following conditions are true:
    • More available physical memory (\Memory\Available MBytes) is required.
    • The modified page list contains a significant amount of memory.
    • The existing page files are fairly full (\Paging File(*)\% Usage).

    Notes
    • Some products or services may require a page file for reasons other than those discussed here. For more information, check your product documentation. For example, Windows Server domain controllers, DFS Replication servers, certificate servers, and LDS servers (including on client editions) are not supported without a configured page file. The database cache algorithm of ESENT (ESE, in Microsoft Exchange Server) depends on the "\Memory\Transition Pages RePurposed/sec" performance monitor counter. A page file is required to make sure that the database cache can release memory if memory is requested by other services or applications. In summary, page file sizing depends on the system crash dump setting requirements and on the peak or expected system commit charge. Both considerations are unique to each system, even for systems that are otherwise identical. This means that page file sizing is unique to each system and cannot be generalized.
    • For Windows Server 2012 Hyper-V and Windows Server 2012 R2 Hyper-V, the page file of the management OS (commonly called the host OS) should be left at the default setting of "System Managed." This is per the Hyper-V product group.
More information

System committed memory

The system commit limit is the sum of physical memory and all page files combined. It represents the maximum system-committed memory (known as the “system commit charge”) that the system can back. The system commit charge is the total committed or “promised” memory of all committed virtual memory in the system. If the system commit charge reaches the system commit limit, the system and processes might not obtain committed memory. This condition can cause hangs, crashes, and other malfunctions. Therefore, make sure that you set the system commit limit large enough to back the system commit charge during peak usage.

The system commit charge and system commit limit can be measured on the Performance tab in Task Manager or by using "\Memory\Committed Bytes" and "\Memory\Commit Limit" performance counters. The "\Memory\% Committed Bytes In Use" counter is a ratio of the "\Memory\Committed Bytes" to "\Memory\Commit Limit" values.
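If you would rather read these numbers programmatically than from Task Manager, one option is the Win32 GlobalMemoryStatusEx call. A minimal ctypes sketch: ullTotalPageFile reports the current system commit limit and ullAvailPageFile the commit still available, so the commit charge is the difference:

import ctypes
from ctypes import wintypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", wintypes.DWORD),
        ("dwMemoryLoad", wintypes.DWORD),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),  # system commit limit
        ("ullAvailPageFile", ctypes.c_ulonglong),  # commit still available
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

GB = 1024 ** 3
commit_limit = status.ullTotalPageFile
commit_charge = commit_limit - status.ullAvailPageFile
print(f"Commit limit:  {commit_limit / GB:.1f} GB")
print(f"Commit charge: {commit_charge / GB:.1f} GB "
      f"({100 * commit_charge / commit_limit:.0f}% in use)")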

Note System-managed page files automatically grow up to three times physical memory or 4 GB (whichever is larger) when the system commit charge reaches 90 percent of the system commit limit. This assumes that enough free disk space is available to accommodate the growth.

System crash dumps

A system crash (also known as a “bug check” or a "Stop error") occurs when the system cannot run correctly. The dump file that is produced from this event is called a system crash dump. A page file or dedicated dump file is used to write a crash dump file (memory.dmp) to disk. Therefore, a page file or a dedicated dump file must be large enough to back the kind of crash dump selected. Otherwise, the system cannot create the crash dump file.

Note During startup, system-managed page files are sized according to the system crash dump settings. This assumes that enough free disk space exists.
System crash dump setting and the minimum page file size it requires:
  • Small memory dump (256 KB): 1 MB
  • Kernel memory dump: depends on kernel virtual memory usage
  • Complete memory dump: 1 x RAM plus 257 MB*
  • Automatic memory dump: depends on kernel virtual memory usage. For details, see Automatic memory dump on MSDN.

* 1 MB of header data and device drivers can total 256 MB of secondary crash dump data.
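The table translates directly into a small sizing helper. A sketch of the arithmetic only; the kernel and automatic settings cannot be computed ahead of time because they depend on kernel virtual memory usage:

def min_pagefile_for_dump(setting, ram_bytes):
    # Minimum page file size implied by the crash dump setting (per the table above)
    MB = 1024 ** 2
    if setting == "small":
        return 1 * MB
    if setting == "complete":
        # 1 x RAM, plus 1 MB of header data, plus up to 256 MB of driver data
        return ram_bytes + 257 * MB
    # "kernel" and "automatic" depend on kernel virtual memory usage at crash time
    raise ValueError(f"{setting!r} cannot be sized statically")

GB = 1024 ** 3
print(min_pagefile_for_dump("complete", 16 * GB) / GB)  # ~16.25 GB for a 16 GB machine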

Automatic memory dump

Windows 8 and Windows Server 2012 introduced the “Automatic memory dump” feature. This feature is enabled by default. This is a new setting, not a new kind of crash dump. This setting automatically selects the best page file size, depending on the frequency of system crashes.

The Automatic memory dump setting at first selects a small paging file size that can accommodate the kernel memory most of the time. If the system crashes again within four weeks, the Automatic memory dump feature sets the page file size to either the RAM size or 32 GB, whichever is smaller.

Note In Windows 8.1 and Windows Server 2012 R2, the initial minimum size of the page file or the dedicated dump file is 1 GB.

Kernel memory crash dumps require enough page file space or dedicated dump file space to accommodate the kernel mode side of virtual memory usage. If the system crashes again within four weeks of the previous crash, a Complete memory dump is selected at restart. This requires a page file or dedicated dump file of at least the size of physical memory (RAM) plus 1 MB for header information plus 256 MB for potential driver data to support all the potential data that is dumped from memory. Again, the system-managed page file will be increased to back this kind of crash dump. If the system is configured to have a page file or a dedicated dump file of a specific size, make sure that the size is sufficient to back the crash dump setting that is listed in the table earlier in this section together with the peak system commit charge.

For more information about system crash dumps, click the following article number to go to the article in the Microsoft Knowledge Base:

969028 How to generate a kernel or a complete memory dump file in Windows Server 2008 and Windows Server 2008 R2

Dedicated dump files

Computers that are running Microsoft Windows or Microsoft Windows Server usually must have a page file to back a system crash dump. System administrators now have the option to create a dedicated dump file instead, starting with the following software packages:
  • Windows 7 Service Pack 1 with hotfix 2716542 applied
  • Windows Server 2008 R2 Service Pack 1 with hotfix 2716542 applied
A dedicated dump file is a page file that is not used for paging. Instead, it is “dedicated” to back a system crash dump file (memory.dmp) when a system crash occurs. Dedicated dump files can be put on any disk volume that can support a page file. We recommend that you use a dedicated dump file when you want a system crash dump but you do not want a page file.
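Per KB 969028, a dedicated dump file is configured through two values under the CrashControl registry key: DedicatedDumpFile (the file path) and DumpFileSize (the size in megabytes). A sketch using Python's winreg; the path and size below are illustrative only, and the change needs administrator rights and a restart to take effect:

import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\CrashControl"

# Illustrative values: a 17 GB dedicated dump file on the D: volume
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as k:
    winreg.SetValueEx(k, "DedicatedDumpFile", 0, winreg.REG_SZ, r"D:\DedicatedDump.sys")
    winreg.SetValueEx(k, "DumpFileSize", 0, winreg.REG_DWORD, 17 * 1024)  # in MB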

For more information about dedicated dump files, click the following article numbers to go to the articles in the Microsoft Knowledge Base:

969028 How to generate a kernel or a complete memory dump file in Windows Server 2008 and Windows Server 2008 R2

950858 Dedicated dump files are unexpectedly truncated to 4 GB on a computer that is running Windows Server 2008 or Windows Vista and that has more than 4 GB of physical memory

System-managed page files

By default, page files are system-managed. This means that the page files increase and decrease based on many factors, such as the amount of physical memory installed, the process of accommodating the system commit charge, and the process of accommodating a system crash dump.

For example, when the system commit charge is more than 90 percent of the system commit limit, the page file is increased to back it. This continues to occur until the page file reaches three times the size of physical memory or 4 GB, whichever is larger. This all assumes that the logical disk that is hosting the page file is large enough to accommodate the growth.
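That growth rule is simple to express. A minimal sketch of the ceiling described above:

def system_managed_pagefile_cap(ram_bytes):
    # Maximum a system-managed page file grows to: 3 x RAM or 4 GB, whichever is larger
    return max(3 * ram_bytes, 4 * 1024 ** 3)

GB = 1024 ** 3
print(system_managed_pagefile_cap(1 * GB) / GB)   # 4.0  (the 4 GB floor wins on small systems)
print(system_managed_pagefile_cap(16 * GB) / GB)  # 48.0 (3 x RAM wins on large systems)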

The following table lists the minimum and maximum page file sizes of system-managed page files.
Operating system, with minimum and maximum system-managed page file sizes:
  • Windows XP and Windows Server 2003 with less than 1 GB of RAM: minimum 1.5 x RAM; maximum 3 x RAM or 4 GB, whichever is larger
  • Windows XP and Windows Server 2003 with more than 1 GB of RAM: minimum 1 x RAM; maximum 3 x RAM or 4 GB, whichever is larger
  • Windows Vista and Windows Server 2008: minimum 1 x RAM; maximum 3 x RAM or 4 GB, whichever is larger
  • Windows 7 and Windows Server 2008 R2: minimum 1 x RAM; maximum 3 x RAM or 4 GB, whichever is larger
  • Windows 8 and Windows Server 2012: minimum depends on crash dump setting*; maximum 3 x RAM or 4 GB, whichever is larger
  • Windows 8.1 and Windows Server 2012 R2: minimum depends on crash dump setting*; maximum 3 x RAM or 4 GB, whichever is larger

* See system crash dumps.

Performance counters

Several performance counters are related to page files. This section describes the counters and what they measure.
\Memory\Pages/sec and other hard page fault counters
The following performance counters measure hard page faults (which include, but are not limited to, page file reads):
  • \Memory\Pages/sec
  • \Memory\Page Reads/sec
  • \Memory\Pages Input/sec
The following performance counters measure page file writes:
  • \Memory\Page Writes/sec
  • \Memory\Pages Output/sec
Hard page faults are faults that must be resolved by retrieving the data from disk. Such data can include portions of DLLs, .exe files, memory-mapped files, and page files. These faults might or might not be related to a page file or to a low-memory condition. Hard page faults are a standard function of the operating system. They occur when the following items are read:
  • Parts of image files (.dll and .exe files) as they are used
  • Memory-mapped files
  • A page file
High values for these counters (excessive paging) indicate disk access of generally 4 KB per page fault on x86 and x64 versions of Windows and Windows Server. This disk access might or might not be related to page file activity but may contribute to poor disk performance that can cause system-wide delays if the related disks are overwhelmed.

Therefore, we recommend that you monitor the disk performance of the logical disks that host a page file in correlation with these counters. Be aware that a system that has a sustained 100 hard page faults per second experiences 400 KB per second disk transfers. Most 7200 RPM disk drives can handle about 5 MB per second at an IO size of 16 KB or 800 KB per second at an IO size of 4 KB. No performance counter directly measures which logical disk the hard page faults are resolved for.
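The arithmetic behind that rule of thumb is worth making explicit. A small sketch reproducing the numbers above, assuming the typical 4 KB page size on x86 and x64:

PAGE_SIZE = 4 * 1024  # bytes moved per hard page fault on x86/x64

def disk_transfer_from_faults(faults_per_sec):
    # Approximate disk throughput generated by a sustained hard page fault rate
    return faults_per_sec * PAGE_SIZE

# 100 sustained hard page faults/sec -> 400 KB/sec of disk transfers,
# half of the ~800 KB/sec a typical 7200 RPM drive sustains at 4 KB IOs
print(disk_transfer_from_faults(100) / 1024, "KB/sec")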
\Paging File(*)\% Usage
The \Paging File(*)\% Usage performance counter measures the percentage of usage of each page file. 100 percent usage of a page file does not indicate a performance problem as long as the system commit limit is not reached by the system commit charge, and if a significant amount of memory is not waiting to be written to a page file.

Note The size of the Modified Page List (\Memory\Modified Page List Bytes) is the total of modified data that is waiting to be written to disk.

If the Modified Page List (a list of physical memory pages that are the least frequently accessed) contains a lot of memory, and if the % Usage value of all page files is greater than 90, you can make more physical memory available for more frequently accessed pages by increasing or adding a page file.

Note Not all the memory on the modified page list is written out to disk. Typically, several hundred megabytes of memory remains resident on the modified list.
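The conditions described above combine into a simple heuristic. A sketch only: the inputs would come from the performance counters named above, and the two default thresholds are illustrative assumptions rather than fixed rules:

def should_extend_pagefile(available_mb, modified_list_mb, pagefile_pct_used,
                           needed_available_mb=1024, modified_threshold_mb=500):
    # Extend or add a page file only if ALL three conditions hold
    return (available_mb < needed_available_mb            # more available RAM is needed
            and modified_list_mb > modified_threshold_mb  # modified page list holds a lot
            and pagefile_pct_used > 90)                   # page files are fairly full

print(should_extend_pagefile(available_mb=300, modified_list_mb=900, pagefile_pct_used=95))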

Multiple page files and disk considerations

If a system is configured to have more than one page file, the page file that responds first is the one that is used. This means that page files that are on faster disks are used more frequently. Also, putting a page file on a "fast" or "slow" disk is important only if the page file is frequently accessed and if the disk that is hosting the respective page file is overwhelmed. Be aware that actual page file usage depends greatly on the amount of modified memory that the system is managing. This means that files that already exist on disk (such as .txt, .doc, .dll, and .exe files) are not written to a page file. Only modified data that does not already exist on disk (for example, unsaved text in Notepad) is memory that could potentially be backed by a page file. After the unsaved data is saved to disk as a file, it is backed by the disk and not by a page file.

Saturday, 18 June 2016

Memory dump for BSOD trouble shooting

Types of Memory Dumps

  • Complete memory dump
  • Kernel memory dump
  • Small memory dump (256 KB)
  • Automatic memory dump

By default, the dump file is written to %SystemRoot%\MEMORY.DMP.
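Which of these dump types a machine is configured for is stored under the CrashControl registry key. A read-only sketch using winreg; the CrashDumpEnabled values map as shown (7, Automatic, was added in Windows 8/Windows Server 2012):

import winreg

# Documented CrashDumpEnabled values under HKLM\SYSTEM\CurrentControlSet\Control\CrashControl
DUMP_TYPES = {
    0: "None",
    1: "Complete memory dump",
    2: "Kernel memory dump",
    3: "Small memory dump (256 KB)",
    7: "Automatic memory dump",
}

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                    r"SYSTEM\CurrentControlSet\Control\CrashControl") as key:
    setting, _ = winreg.QueryValueEx(key, "CrashDumpEnabled")
    dump_file, _ = winreg.QueryValueEx(key, "DumpFile")  # usually %SystemRoot%\MEMORY.DMP

print(DUMP_TYPES.get(setting, f"Unknown ({setting})"), "->", dump_file)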

RAID Levels

What is RAID?

RAID stands for Redundant Array of Inexpensive Disks, later reinterpreted as Redundant Array of Independent Disks. The technology is now used in almost all IT organizations looking for data redundancy and better performance. It combines multiple physical disks into one or more logical drives and gives you the ability to survive one or more drive failures, depending on the RAID level used.

Why to use RAID?

With storage and data demands increasing worldwide, a prime concern for organizations is the safety of their data. When I use the term security here, it does not mean security from vulnerability attacks, but rather from hard disk failures and similar accidents that can destroy data. In those scenarios RAID works its magic, giving you redundancy and the opportunity to get all your data back in very little time.

Levels

As newer technologies have been introduced, new RAID levels have come into the picture, each with its own improvements, giving organizations the opportunity to select the RAID model that fits their requirements.

Here is a brief introduction to some of the main RAID levels used in organizations.

RAID 0

This level stripes the data equally across all available drives, giving very high read and write performance but offering no fault tolerance or redundancy. Because it provides no redundancy at all, it should not be considered by an organization looking for fault tolerance; instead it is preferred where raw performance is the requirement.

Calculation:
No. of Disk: 5
Size of each disk: 100GB

Usable Disk size: 500GB

Pros:
  • Data is striped across multiple drives
  • Disk space is fully utilized
  • Minimum 2 drives required
  • High performance

Cons:
  • No support for data redundancy
  • No support for fault tolerance
  • No error detection mechanism
  • Failure of either disk results in complete data loss in the array

RAID 1

This level mirrors the data on drive 1 to drive 2. It offers 100% redundancy, as the array continues to work even if one disk fails. Organizations looking for better redundancy can opt for this solution, but cost can become a factor.

Calculation:
No. of Disk: 2
Size of each disk: 100GB

Usable Disk size: 100GB

Pros:
  • Performs mirroring of data, i.e. identical data from one drive is written to another drive for redundancy
  • High read speed, as either disk can be used if one disk is busy
  • Array will function even if any one of the drives fails
  • Minimum 2 drives required

Cons:
  • Expense is higher (1 extra drive required per drive for mirroring)
  • Slow write performance, as all drives have to be updated

RAID 2

This level uses bit-level data striping rather than block-level. To be able to use RAID 2, make sure the disks selected have no self error-checking mechanism, as this level uses an external Hamming code for error detection. This is one of the reasons RAID 2 barely exists in the real IT world today: most modern disks come with built-in error detection. It uses an extra disk for storing all the parity information.

Calculation:
Formula: n-1 where n is the no. of disk

No. of Disk: 3
Size of each disk: 100GB

Usable Disk size: 200GB

Pros:
  • Bit-level striping with parity
  • One designated drive is used to store parity
  • Uses Hamming code for error detection

Cons:
  • Only useful with drives that have no built-in error detection mechanism
  • These days all SCSI drives have error detection
  • Additional drives required for error detection

RAID 3

This level uses byte-level striping along with parity. One dedicated drive is used to store the parity information, and in case of any drive failure the data is regenerated using this extra drive. But if the parity drive itself crashes, the redundancy is lost, so this level is not widely adopted by organizations.

Calculation:
Formula: n-1 where n is the no. of disk

No. of Disk: 3
Size of each disk: 100GB

Usable Disk size: 200GB


Pros:
  • Byte-level striping with parity
  • One designated drive is used to store parity
  • Data is regenerated using the parity drive
  • Data is accessed in parallel
  • High data transfer rates (for large files)
  • Minimum 3 drives required

Cons:
  • Additional drive required for parity
  • No redundancy if the parity drive crashes
  • Slow performance when operating on small files


RAID 4

This level is very similar to RAID 3, except that RAID 4 uses block-level striping rather than byte-level.

Calculation:
Formula: n-1 where n is the no. of disk

No. of Disk: 3
Size of each disk: 100GB

Usable Disk size: 200GB

Pros:
  • Block-level striping with dedicated parity
  • One designated drive is used to store parity
  • Data is accessed independently
  • Minimum 3 drives required
  • High read performance, since data is accessed independently

Cons:
  • Performance degrades, since only one block can be accessed at a time
  • Additional drive required for parity
  • Write operations become slow, as parity has to be written on every update


RAID 5

It uses block-level striping, and with this level the distributed parity concept came into the picture, leaving behind the traditional dedicated parity used in RAID 3 and RAID 4. Parity information is written to a different disk in the array for each stripe. In case of a single disk failure, data can be recovered with the help of the distributed parity without interrupting other read and write operations.

Calculation:
Formula: n-1 where n is the no. of disk

No. of Disk: 4
Size of each disk: 100GB

Usable Disk size: 300GB


Pros:
  • Block-level striping with DISTRIBUTED parity
  • Parity is distributed across the disks in the array
  • High performance
  • Cost effective
  • Minimum 3 drives required

Cons:
  • In case of disk failure, recovery may take longer, as parity has to be calculated from all the surviving drives
  • Cannot survive two concurrent drive failures


RAID 6

This level is an enhanced version of RAID 5 that adds the extra benefit of dual parity: it uses block-level striping with DUAL distributed parity, giving you extra redundancy. Imagine you are using RAID 5 and one of your disks fails. You need to hurry to replace the failed disk, because if another disk fails in the meantime you won't be able to recover any of the data. For those situations RAID 6 plays its part: you can survive 2 concurrent disk failures before you run out of options.

Calculation:
Formula: n-2 where n is the no. of disk

No. of Disk: 4
Size of each disk: 100GB

Usable Disk size: 200GB

Pros:
  • Block-level striping with DUAL distributed parity
  • 2 parity blocks are created
  • Can survive 2 concurrent drive failures in an array
  • Extra fault tolerance and redundancy
  • Minimum 4 drives required

Cons:
  • Cost can become a factor
  • Writing data takes longer due to dual parity


RAID 0+1

This level combines RAID 0 and RAID 1 to provide redundancy: striping of the data is performed before mirroring. At this level the overall usable capacity is reduced compared to parity-based RAID levels. You can sustain more than one drive failure only as long as all the failures occur within the same striped set (the same side of the mirror).

NOTE: The number of drives used should always be a multiple of 2 (minimum 4).

Calculation:
Formula: n/2 * size of disk (where n is the no. of disk)

No. of Disk: 8
Size of each disk: 100GB

Usable Disk size: 400GB

Pros:
  • No parity generation
  • Performs RAID 0 to stripe data and RAID 1 to mirror
  • Striping is performed before mirroring
  • Usable capacity is n/2 * size of disk (n = no. of disks)
  • Drives required should be a multiple of 2
  • High performance, as data is striped

Cons:
  • Costly, as an extra drive is required for each drive
  • 100% disk capacity is not utilized, as half is used for mirroring
  • Very limited scalability


RAID 1+0 (RAID 10)

This level performs mirroring of the data prior to striping, which makes it more efficient and redundant than RAID 0+1. This level can survive multiple simultaneous drive failures, as long as no two failures land in the same mirrored pair. It can be used in organizations where both high performance and data safety are required. In terms of fault tolerance and rebuild performance it is better than RAID 0+1.

NOTE: The number of drives used should always be a multiple of 2 (minimum 4).

Calculation:
Formula: n/2 * size of disk (where n is the no. of disk)

No. of Disk: 8
Size of each disk: 100GB

Usable Disk size: 400GB

Pros:
  • No parity generation
  • Performs RAID 1 to mirror and RAID 0 to stripe data
  • Mirroring is performed before striping
  • Drives required should be a multiple of 2
  • Usable capacity is n/2 * size of disk (n = no. of disks)
  • Better fault tolerance than RAID 0+1
  • Better redundancy and faster rebuild than RAID 0+1
  • Can sustain multiple drive failures

Cons:
  • Very expensive
  • Limited scalability
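To wrap up, the usable-capacity formulas used throughout this post reduce to a few lines of code. A sketch that reproduces the calculations from the sections above (all disks assumed to be the same size):

def usable_capacity(level, disks, disk_size_gb):
    # Usable capacity per the formulas in this post
    if level == 0:                    # striping only: every disk holds data
        return disks * disk_size_gb
    if level == 1:                    # two-disk mirror: one disk's worth of data
        return disk_size_gb
    if level in (2, 3, 4, 5):         # one drive's worth of parity: n - 1
        return (disks - 1) * disk_size_gb
    if level == 6:                    # dual parity: n - 2
        return (disks - 2) * disk_size_gb
    if level in ("0+1", "1+0", 10):   # mirrored halves: n / 2
        return disks // 2 * disk_size_gb
    raise ValueError(f"unknown RAID level: {level}")

# The examples used in this post:
print(usable_capacity(0, 5, 100))    # 500 GB
print(usable_capacity(1, 2, 100))    # 100 GB
print(usable_capacity(5, 4, 100))    # 300 GB
print(usable_capacity(6, 4, 100))    # 200 GB
print(usable_capacity(10, 8, 100))   # 400 GB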