All Netapp Cifs 2


Changing the storage system domain

If you already configured your storage system for Windows Domain authentication and you want
to move the storage system to a different domain, you need to rerun the cifs setup command.
Before you begin

You must have an administrative account with permissions to add any Windows server to the
domain.
About this task

After you change the storage system's domain, Data ONTAP updates the membership of the
BUILTIN\Administrators group to reflect the new domain. This change ensures that the new
domain's Administrators group can manage the storage system even if the new domain is not a
trusted domain of the old domain.
Note: Until you put the CIFS server into a new domain or workgroup, you can cancel the CIFS
setup process and return to your old settings by pressing Ctrl-C and then entering the cifs
restart command.
Steps

1. If CIFS is currently running, enter the following command:

cifs terminate

2. Enter the following command:

cifs setup

The following prompt appears:

Do you want to delete the existing filer account information? [no]
3. Delete your existing account information by entering yes at the prompt.


Note: You must delete your existing account information to reach the DNS server entry
prompt. After deleting your account information, you are given the opportunity to rename the
storage system:
The default name of this filer will be 'system1'.
Do you want to modify this name? [no]:

4. Keep the current storage system name by pressing Enter; otherwise, enter yes and enter a
new storage system name.
Data ONTAP displays a list of authentication methods:
Data ONTAP CIFS services support four styles of user authentication.
Choose the one from the list below that best suits your situation.
(1) Active Directory domain authentication (Active Directory domains only)
(2) Windows NT 4 domain authentication (Windows NT or Active Directory domains)
(3) Windows Workgroup authentication using the filer's local user accounts
(4) /etc/passwd and/or NIS/LDAP authentication
Selection (1-4)? [1]:

5. Accept the default method for domain authentication (Active Directory) by pressing Enter;
otherwise, choose a new authentication method.
6. Respond to the remainder of the cifs setup prompts; to accept a default value, press Enter.
Upon exiting, the cifs setup utility starts CIFS.
7. Confirm your changes by entering the following command:
cifs domaininfo

Data ONTAP displays the storage system's domain information.

Support for the Windows Owner Rights security principal


Data ONTAP 8.1.2 and later releases support the Owner Rights security principal. You can add
this security principal to a file or directory DACL (discretionary access-control list) to override
the default behavior of owners of files or directories.
Owner Rights is a well-known security principal with the well-known security identifier (SID)
S-1-3-4 that is available with Windows Server 2008 and Windows Vista and later releases. By
default, when a user creates a file or directory, that user is the owner of the file or directory, and
the owner has certain default rights. The administrator can override the default rights by adding
the Owner Rights principal to a file or directory's DACL and then specifying the desired
permissions by assigning ACEs (access control entries) to the Owner Rights security
principal.
You can use either of the following methods to manage the Owner Rights security principal for
files or directories residing within volumes:
By using the Security tab on the Windows Properties window
By using the secedit utility, available as a software download from the support site

If you reconfigure CIFS with the cifs setup command when a UNIX-based KDC is
configured for NFS, Data ONTAP renames your UNIX keytab file to include the string UNIX.
To rename the keytab file for UNIX-based KDCs, enter yes when Data ONTAP displays the
following prompt during CIFS reconfiguration:
*** Setup has detected that this filer is configured to support Kerberos
*** authentication with NFS clients using a non Active Directory KDC. If
*** you choose option 1 below, to allow NFS to use the non Active
*** Directory KDC, your existing keytab file '/etc/krb5.keytab' will be
*** renamed to '/etc/UNIX_krb5.keytab'. NFS will be using the new keytab
*** file '/etc/UNIX_krb5.keytab'.
Do you want to continue. (Yes/No)?

If you enter yes, Data ONTAP renames the keytab file for UNIX-based KDCs; if you enter no
or press Enter, Data ONTAP terminates the CIFS reconfiguration process. This renaming is
needed for Kerberos multirealm configurations.

Share naming conventions


Share naming conventions for Data ONTAP are the same as for Windows. You should keep Data
ONTAP share naming conventions in mind when you create a share.
For example, share names ending with the $ character are hidden shares, and certain share names,
such as ADMIN$ and IPC$, are reserved.
Share names are not case-sensitive.
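The conventions above can be sketched in Python. This is an illustrative model only, not Data ONTAP code; the helper names are hypothetical:

```python
# Illustrative sketch of the share-naming rules described above.
RESERVED_SHARES = {"ADMIN$", "IPC$"}  # reserved share names

def is_hidden(share_name: str) -> bool:
    """Share names ending with '$' are hidden shares."""
    return share_name.endswith("$")

def is_reserved(share_name: str) -> bool:
    """Certain names are reserved; names are not case-sensitive."""
    return share_name.upper() in RESERVED_SHARES

def same_share(a: str, b: str) -> bool:
    """Share names are not case-sensitive, so compare case-folded."""
    return a.lower() == b.lower()
```

For example, a share named backup$ would be hidden from browse lists, and Webpages and webpages would refer to the same share.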

About the forcegroup option


When you create a share from the Data ONTAP command line, you can use the forcegroup
option to specify that all files created by CIFS users in that share belong to the same group (that
is, the forcegroup), which must be a predefined group in the UNIX group database.
Specifying a forcegroup is meaningful only if the share is in a UNIX or mixed qtree. There is no
need to use forcegroups for shares in an NTFS qtree because access to files in these shares is
determined by Windows permissions, not GIDs.
If a forcegroup has been specified for a share, the following becomes true of the share:
CIFS users in the forcegroup who access this share are temporarily changed to the GID of the
forcegroup.
This GID enables them to access files in this share that are not accessible normally with their
primary GID or UID.
All files in this share created by CIFS users belong to the same forcegroup, regardless of the
primary GID of the file owner.
When CIFS users try to access a file created by NFS, the CIFS users' primary GIDs determine
access rights.
The forcegroup does not affect how NFS users access files in this share. A file created by NFS
acquires the GID from the file owner. Determination of access permissions is based on the UID
and primary GID of the NFS user who is trying to access the file.
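The rules above can be summarized in a rough Python sketch. The helpers are hypothetical models of the described behavior, not Data ONTAP internals:

```python
# Sketch of forcegroup behavior: CIFS users temporarily take the
# forcegroup's GID, and files they create belong to the forcegroup;
# NFS access and file creation always use the user's own primary GID.
def effective_gid(protocol: str, user_primary_gid: int, forcegroup_gid=None) -> int:
    """GID used for access checks while a user is in the share."""
    if protocol == "cifs" and forcegroup_gid is not None:
        return forcegroup_gid
    return user_primary_gid

def new_file_group(protocol: str, owner_primary_gid: int, forcegroup_gid=None) -> int:
    """Group assigned to a newly created file in the share."""
    return effective_gid(protocol, owner_primary_gid, forcegroup_gid)
```

So with a forcegroup GID of 5000, a CIFS user whose primary GID is 100 accesses and creates files as group 5000, while the same user over NFS keeps group 100.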
Using a forcegroup makes it easier to ensure that files can be accessed by CIFS users belonging
to various groups. For example, if you want to create a share to store the company's web pages
and give write access to users in Engineering and Marketing, you can create a share and give

write access to a forcegroup named webgroup1. Because of the forcegroup, all files created by
CIFS users in this share are owned by the web group. In addition, users are automatically
assigned the GID of the web group when accessing the share. As a result, all the users can write
to this share without your managing the access rights of the Engineering and Marketing groups.
Example
The following command creates a webpages share in the /vol/vol1/companyinfo
directory with a maximum of 100 users and in which all files that CIFS users create
belong to the webgroup1 group:
cifs shares -add webpages /vol/vol1/companyinfo -comment "Product Information" -forcegroup webgroup1 -maxusers 100

Specifying -nocomment, -nomaxusers, -noforcegroup, and -noumask clears the share's
description, maximum number of users, forcegroup, and umask values, respectively.

Specifying permissions for newly created files and directories in a share


You can specify the permissions of newly created files and directories in a share having mixed or
UNIX qtree security style by setting the share's umask option.
About this task

You must specify the share's umask option as an octal (base-8) value. The default umask value
is 0.
Note: The value of a share's umask option does not affect NFS.
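A umask works by masking off permission bits. The following Python sketch shows the arithmetic (the default mode values used in the example are illustrative, not taken from Data ONTAP):

```python
# Sketch of how an octal umask affects the mode of newly created
# files and directories: every bit set in the umask is cleared.
def apply_umask(default_mode: int, umask: int) -> int:
    """Return the resulting permission bits after masking."""
    return default_mode & ~umask
```

For example, with a default mode of 0o777 and a umask of 0o022, new entries get 0o755; with the default umask of 0, permissions are left unchanged.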

Enabling or disabling browsing


You can enable or disable browsing to allow users to see or prevent users from seeing a specific
share.
Before you begin

You must enable the browse option on each share for which you want to enable browsing.

Enabling or disabling virus scanning


You can enable or disable virus scanning on one or more shares to increase security or
performance, respectively.
About this task

By default, Data ONTAP scans any file that a client opens for viruses.
Step

1. Perform one of the following actions.

Setting client-side caching properties for a share


You can set client-side caching properties for a share using the Computer Management application
on Windows 2000, XP, and 2003 clients.

About access-based enumeration


When access-based enumeration (ABE) is enabled on an SMB share, users who do not have
permission to access the contents of a shared folder do not see that shared resource displayed in
their environment.
Conventional share properties allow you to specify which users (individually or in groups) have
permission to view or modify shared resources. However, they do not allow you to control
whether shared folders or files are visible to users who do not have permission to access them.
This could pose problems if the names of shared folders or files describe sensitive information,
such as the names of customers or products under development.
Access-based enumeration (ABE) extends share properties to include the enumeration of shared
resources. ABE therefore enables you to filter the display of shared resources based on user
access rights. In addition to protecting sensitive information in your workplace, ABE enables you

to simplify the display of large directory structures for the benefit of users who do not need
access to your full range of content.
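The filtering that ABE performs can be sketched as follows. This is an illustrative model of the described behavior, not the SMB server implementation; the data structures are hypothetical:

```python
# Sketch of access-based enumeration: with ABE enabled, the browse
# list is filtered to the entries the user has permission to access;
# with ABE disabled, every entry is listed even if a later access
# attempt would be denied.
def enumerate_shares(entries: dict, user: str, abe_enabled: bool) -> list:
    """entries maps a shared-resource name to the set of allowed users."""
    if not abe_enabled:
        return sorted(entries)
    return sorted(name for name, allowed in entries.items() if user in allowed)
```

So a user without permission on a payroll folder simply never sees it in the listing when ABE is on.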

Deleting a share from the Data ONTAP command line


You can use the cifs shares command to delete a share from the Data ONTAP command
line.
Step

1. Enter the following command:

cifs shares -delete [-f] sharename
The -f option forces all files closed on the share without prompting. This is useful when using the
command in scripts.
sharename specifies the name of the share you want to delete.

Changing a share-level ACL from the Data ONTAP command line


You can change a share-level ACL from the Data ONTAP command line by using the cifs access
command.
Step

1. Enter the following command:

cifs access share [-g] user rights
share is the name of the share (you can use the * and ? wildcards).
user is the name of the user or group (UNIX or Windows).
If user is a local group, specify the storage system name as the domain name (for example,
toaster\writers).
rights are the access rights. For Windows users, you specify one of these choices of access
rights: No Access, Read, Change, Full Control. For UNIX users, you specify one of these
choices of access rights: r (read), w (write), x (execute).
Use the -g option to specify that user is the name of a UNIX group.

Managing home directories


You can create user home directories on the storage system and configure Data ONTAP to
automatically offer each user a home directory share.
About this task

From the CIFS client, the home directory works the same way as any other share to which the
user can connect.
Each user can connect only to his or her home directories, not to home directories for other users.

About home directories on the storage system


Data ONTAP maps home directory names to user names, searches for home directories that you
specify, and treats home directories slightly differently than regular shares.
Data ONTAP offers the share to the user with a matching name. The user name for matching can be
a Windows user name, a domain name followed by a Windows user name, or a UNIX user name.
Home directory names are not case-sensitive.
When Data ONTAP tries to locate the directories named after the users, it searches only the paths
that you specify. These paths are called home directory paths. They can exist in different volumes.
The following differences exist between a home directory and other shares:

You cannot change the share-level ACL and the comment for a home directory.
The cifs shares command does not display the home directories.
The format of specifying the home directory using the Universal Naming Convention (UNC) is
sometimes different from that for specifying other shares.
If you specify /vol/vol1/enghome and /vol/vol2/mktghome as the home directory paths, Data
ONTAP searches these paths to locate user home directories. If you create a directory for jdoe in
the /vol/vol1/enghome path and a directory for jsmith in the /vol/vol2/mktghome path, both
users are offered a home directory. The home directory for jdoe corresponds to the
/vol/vol1/enghome/jdoe directory, and the home directory for jsmith corresponds to the
/vol/vol2/mktghome/jsmith directory.

How Data ONTAP matches a directory with a user


You can specify the naming style of home directories to determine how Data ONTAP matches a
directory with a user.
These are the naming styles that you can choose from, and some information about each style:
Windows name
Data ONTAP searches for the directory whose name matches the user's Windows name.
Hidden name
If the naming style is hidden, users connect to their home directories using their Windows user
name with a dollar sign appended to it (name$), and Data ONTAP searches for a directory that
matches the Windows user name (name).
Windows domain name and Windows name
If users from different domains have the same user name, they must be differentiated using the
domain name.
In this naming style, Data ONTAP searches for a directory in the home directory path that
matches the domain name. Then it searches the domain directory for the home directory that
matches the user name.
Example: To create a directory for engineering\jdoe and a directory for marketing\jdoe, you
create the two directories in the home directory paths. The directories have the same names as the
domain names (engineering and marketing). Then you create user home directories in these
domain directories.
Mapped UNIX name
If the naming style is UNIX, Data ONTAP searches for the directory that matches the user's
mapped UNIX name.
Example: If John Doe's Windows name jdoe maps to the UNIX name johndoe, Data ONTAP
searches the home directory paths for the directory named johndoe (not jdoe) and offers it as the
home directory to John Doe.
If you do not specify a home directory naming style, Data ONTAP uses the user's Windows name
for directory matching. This is the same style used by versions of Data ONTAP prior to version 6.0.
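The four naming styles can be summarized in a short Python sketch. It is an illustrative model of the matching rules above, not Data ONTAP code; the mapping table is hypothetical:

```python
# Sketch of which directory name Data ONTAP looks for under the home
# directory paths, given the configured naming style.
def home_dir_lookup(namestyle, windows_user, domain=None, unix_map=None):
    if namestyle in ("ntname", ""):      # Windows name (the default)
        return windows_user
    if namestyle == "hidden":            # user connects as name$,
        return windows_user.rstrip("$")  # directory is the plain name
    if namestyle == "domain":            # domain directory, then user directory
        return f"{domain}/{windows_user}"
    if namestyle == "mapped":            # mapped UNIX name from usermap
        return (unix_map or {}).get(windows_user, windows_user)
    raise ValueError(f"unknown naming style: {namestyle}")
```

For instance, with the mapped style and jdoe mapped to johndoe, the search target is johndoe; with the domain style, engineering\jdoe resolves to an engineering directory containing a jdoe directory.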

Specifying home directory paths


Data ONTAP searches the home directory paths in the order you specify for the directory that
matches the user name. You can specify home directory paths by editing the
/etc/cifs_homedir.cfg file.

About this task

You can specify multiple home directory paths. Data ONTAP stops searching when it finds the
matching directory.
You can add an extension to the home directory path if you do not want users to access the top level
of their home directories. The extension specifies a subdirectory that is automatically opened when
users access their home directories.
You can change the home directory paths at any time by changing the entries in the
cifs_homedir.cfg file. However, if a user has open files in a home directory path that you remove
from the list, Data ONTAP displays a warning message and requests a confirmation for the change.
Changing a directory path that contains an open file terminates the connection to the home directory.
Data ONTAP creates a default cifs_homedir.cfg file in the /etc directory when CIFS starts, if
the file does not already exist. Changes to this file are processed automatically whenever CIFS starts.
You can also process changes to this file by using the cifs homedir load command.
Steps

1. Create directories to use as home directory paths.

For example, in the /vol/vol0 volume, create a directory named enghome.
2. Open the /etc/cifs_homedir.cfg file for editing.
3. Enter the home directory path names created in Step 1 in the /etc/cifs_homedir.cfg file,
one entry per line, to designate them as the paths where Data ONTAP searches for user home
directories.
You can enter up to 1,000 path names.
4. Enter the following command to process the entries:
cifs homedir load [-f]
The -f option forces the use of new paths.
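The ordered search that these paths drive can be sketched as follows. This is an illustrative model, assuming a simple set of existing directories; it is not how Data ONTAP is implemented:

```python
# Sketch of the home-directory-path search: paths are tried in the
# order listed in /etc/cifs_homedir.cfg, and the search stops at the
# first path that contains a directory matching the user name.
def find_home_dir(paths, username, existing_dirs):
    """existing_dirs is the set of directory paths that exist."""
    for path in paths:                  # order in the file matters
        candidate = f"{path}/{username}"
        if candidate in existing_dirs:
            return candidate            # stop at the first match
    return None                         # no home directory offered
```

With paths /vol/vol1/enghome and /vol/vol2/mktghome, a jsmith directory present only under the second path is still found, but one present under both is taken from the first.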

Displaying the list of home directory paths


You can use the cifs homedir command to display the current list of directory paths.
Step

1. Enter the following command:

cifs homedir
Note: If you are using the hidden naming style for home directories, when you display the list
of home directory paths, Data ONTAP automatically appends a dollar sign to the home
directory name (for example, name$).
Result

If you are using the hidden naming style for home directories, home directories are not displayed in
the following cases:
In DOS, when you use the net view \\filer command
In Windows, when you use an Explorer application to access the storage system and display
home directory folders

Specifying the naming style of home directories


You can specify the naming style used for home directories by setting the
cifs.home_dir_namestyle option.
Step

1. Enter the following command:

options cifs.home_dir_namestyle {ntname | hidden | domain | mapped | ""}

Use ntname if the home directories have the same names as the Windows user names.
Use hidden if you want to use a Windows user name with a dollar sign ($) appended to it to
initiate a search for a home directory with the same name as the Windows user name.
Use domain if you want to use the domain name in addition to the Windows user name to search
for the home directory.
Use mapped if the home directories have the UNIX user names as specified in the usermap.cfg
file.
Use "" if you do not want to specify a name style; Data ONTAP then matches home directories
to users by their Windows names.
By default, the cifs.home_dir_namestyle option is "".

Creating directories in a home directory path (non-domain-naming style)

If the cifs.home_dir_namestyle option is not domain, you can create home directories by
creating the directories and making the users the owners of the directories.
Steps

1. In the specified home directory paths, create home directories.


Example

For example, if there are two users, jsmith and jdoe, create the /vol/vol0/enghome/jsmith
and /vol/vol1/mktghome/jdoe home directories.
Users can attach to the share that has the same name as their user name and start using the share
as their home directory.
2. Make each user the owner of his or her home directory.
Example

For example, make jsmith the owner of the /vol/vol0/enghome/jsmith home directory and
jdoe the owner of the /vol/vol1/mktghome/jdoe home directory.
Note: If the naming style is hidden, users must enter their user name with a dollar sign
appended to it (for example, name$) to attach to their home directory.


If the naming style is domain, the user engineering\jsmith can attach to the share named jsmith,
which corresponds to the /vol/vol0/enghome/engineering/jsmith home directory, and the user
marketing\jdoe can attach to the share named jdoe, which corresponds to the
/vol/vol1/mktghome/marketing/jdoe home directory.

Creating subdirectories in home directories when a home directory path extension is used

You can create subdirectories that users can access in their home directories if you use a home
directory path extension.
Step

1. For each home directory that resides in a home directory path with an extension, create a
subdirectory that you want users to access.
For example, if the /etc/cifs_homedir.cfg file includes the /vol/vol0/enghome/%u%/data
path, create a subdirectory named data in each home directory.
Users can attach to the share that has the same name as their user name. When they read or write
to the share, they effectively access the data subdirectory.
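A path extension amounts to a simple placeholder expansion, sketched below. This assumes the %u% placeholder syntax shown in the example above; the helper itself is hypothetical, not a Data ONTAP function:

```python
# Sketch of expanding a home directory path entry that carries an
# extension: the user placeholder is replaced by the user name, so the
# user lands directly in the named subdirectory.
def resolve_with_extension(path_entry: str, username: str) -> str:
    """Expand e.g. '/vol/vol0/enghome/%u%/data' for a given user."""
    return path_entry.replace("%u%", username)
```

So when jsmith connects, reads and writes land in /vol/vol0/enghome/jsmith/data rather than the top of the home directory.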

Disabling home directories


You can stop offering home directories by deleting the /etc/cifs_homedir.cfg file. You cannot
use the cifs shares -delete command to delete home directories.
Step

1. Delete the /etc/cifs_homedir.cfg file on the storage system.

Managing local users and groups


This section provides information about creating and managing local users and groups on the storage
system.

Managing local users


Local users can be specified in user and group lists. For example, you can specify local users in
file-level ACLs and share-level ACLs. You can also add local users to local groups.
When you should create local user accounts
There are several reasons for creating local user accounts on your storage system.
You should create one or more local user accounts if your system configuration meets the following
criteria:
If, during setup, you configured the storage system to be a member of a Windows workgroup.
In this case, the storage system must use the information in local user accounts to authenticate
users.
If your storage system is a member of a domain:
Local user accounts enable the storage system to authenticate users who try to connect to the
storage system from an untrusted domain.
Local users can access the storage system when the domain controller is down or when
network problems prevent your storage system from contacting the domain controller.
For example, you can define a BUILTIN\Administrator account that you can use to access the
storage system even when the storage system fails to contact the domain controller.
Note: If, during setup, you configured your storage system to use UNIX mode for authenticating

users, you should not create local user accounts. In UNIX mode, the storage system always
authenticates users using the UNIX password database.

Displaying the storage system's authentication method


You can display the storage system's authentication method, and thus determine whether you
should create local users and groups, by entering the cifs sessions command.
Step

1. Enter the following command:

cifs sessions

For more information, see the na_cifs_sessions(1) man page.


Limitations of local user accounts
There are several limitations with local user accounts.
You cannot use User Manager to manage local user accounts on your storage system.
You can use User Manager in Windows NT 4.0 only to view local user accounts.
If you use User Manager in Windows 2000, however, you cannot use the Users menu to view
local users. You must use the Groups menu to display local users.
You can create a maximum of 96 local user accounts.
Adding, displaying, and removing local user accounts
You can add, display, and remove local user accounts by using the useradmin command.
About this task
You use the useradmin command for creating, displaying, and deleting administrative users on the
storage system. (You can also use this command to manage non-local users through the domainuser
subcommand.)

Managing local groups


You can manage local groups to control which users have access to which resources.
About this task

A local group can consist of users or global groups from any trusted domains. Members of a local
group can be given access to files and resources.
Membership in certain well-known local groups confers special privileges on the storage system. For
example, members of BUILTIN\Power Users can manipulate shares, but have no other
administrative capabilities.
CIFS clients display the name of a local group in one of the following formats:
FILERNAME\localgroup
BUILTIN\localgroup
Adding, displaying, and removing local groups from the Data ONTAP command line
You can add, display, and remove local groups from the Data ONTAP command line by using the
useradmin command.

Managing authentication and network services


This section provides information about storage system authentication, as well as procedures for

managing the older NetBIOS protocol.

Understanding authentication issues


Your storage system supports three types of authentication: UNIX authentication, Windows
workgroup authentication, and Kerberos authentication.

About UNIX authentication


In UNIX mode, authentication is performed using entries in the /etc/passwd file and/or
using NIS/LDAP-based authentication.
Using UNIX authentication:
Passwords are sent in the clear (unencrypted).
Authenticated users are given credentials with no unique, secure user identification (SID).
The storage system verifies the received password against a hash (algorithmic variant) of the
user password.
Passwords are not stored on the storage system.
In order to provide UNIX client authentication, the following items must be configured:
Client information must be in the storage system's /etc/passwd file.
Client information must be entered in NIS and/or LDAP.
Windows client registries must be modified to allow plain text passwords.
Because UNIX authentication transmits unencrypted passwords, Windows clients require a
Registry edit to enable them to send passwords without encryption. Clients that are not properly
configured to send clear text passwords to the storage system might be denied access and display
an error message similar to the following:
System error 1240 has occurred.
The account is not authorized to log in from this station.

Refer to Microsoft support for information to enable plain text passwords, to allow clients to use
UNIX authentication.
About Windows workgroup authentication
Workgroup authentication allows local Windows client access.
The following facts apply to workgroup authentication:
Does not rely upon a domain controller
Limits storage system access to 96 local clients
Is managed using the storage system's useradmin command

Kerberos authentication for CIFS


With Kerberos authentication, upon connection to your storage system, the client negotiates the
highest possible security level. However, if the client cannot use Kerberos authentication, Microsoft
NTLM or NTLM V2 is used to authenticate with the storage system.

Selecting domain controllers and LDAP servers


At startup, and under the conditions listed below, your storage system searches for a Windows
domain controller. This section describes how and when the storage system finds and selects
domain controllers.
About this task

The storage system searches for domain controllers when any of the following is true:
The storage system has been started or rebooted.
A cifs resetdc command has been issued.
Four hours have elapsed since the last search.
Note: Active Directory LDAP servers are searched for under the same conditions.

Understanding the domain controller discovery process


When you run CIFS in a domain environment, your storage system attempts to rediscover all of
its domain controllers by sending Internet Control Message Protocol (ICMP) packets once every
4 hours. Doing so enables it to verify that the current domain controller is still accessible and to
prioritize the available domain controllers by the packets' round-trip times.
If a storage system loses access to a domain controller with a very good connection rate and has
to fail over to a backup domain controller with a slower rate, the storage system rediscovers domain
controllers every 2 minutes until it finds a better connection. After the storage system finds that
connection, it connects to the new domain controller and returns to sending discovery packets
every 4 hours.
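The prioritization and rediscovery behavior just described can be sketched in Python. The data structures and helper names are hypothetical; only the 4-hour and 2-minute intervals come from the text:

```python
# Sketch of domain controller selection by measured round-trip time,
# and of the rediscovery interval: 4 hours (240 minutes) normally,
# shrinking to 2 minutes while connected to a slower-than-best DC.
def pick_domain_controller(rtts: dict) -> str:
    """rtts maps a DC address to its measured round-trip time in ms."""
    return min(rtts, key=rtts.get)  # lowest RTT wins

def rediscovery_interval_minutes(connected_dc: str, rtts: dict) -> int:
    best = pick_domain_controller(rtts)
    return 240 if connected_dc == best else 2
```

So while the system is forced onto a backup controller with a worse round-trip time, it probes every 2 minutes; once it is back on the best-ranked controller, probing returns to every 4 hours.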
The following table describes the domain controller discovery process and priority groups. The
storage system only progresses to a lower priority group when it has failed to contact all domain
controllers in the priority group above it.
Ensuring successful authentication with Windows Server 2008 R2 domain controllers
If your CIFS domain contains Windows Server 2008 R2 domain controllers, you need to take certain
steps to ensure successful authentication.
About this task

Data ONTAP requires either a writable domain controller or a read-only domain controller that is
configured to replicate passwords for the storage system.
Specifying a list of preferred domain controllers and LDAP servers
You can specify a list of preferred domain controllers and LDAP servers by using the cifs prefdc
add command.
Step

1. Enter the following command:

cifs prefdc add domain address [address...]
domain specifies the domain for which you want to specify domain controllers or LDAP servers.
address specifies the IP address of the domain controller or LDAP server.

Example
The following command specifies two preferred domain controllers for the lab domain:

cifs prefdc add lab 10.10.10.10 10.10.10.11

Note: To force the storage system to use a revised list of preferred domain controllers or
LDAP servers, use the cifs resetdc command.

Deleting servers from the preferred domain controller list


You can use the cifs prefdc delete command to delete entries from the preferred domain
controller list. You should use this command, for example, to remove servers that are no longer
online or no longer serving as domain controllers.
Steps

1. Enter the following command:

cifs prefdc delete domain
domain is the domain where the preferred domain controller or LDAP server resides.

2. Enter the following command:

cifs resetdc [domain]
domain is the domain you specified in Step 1.

After you delete a domain from the prefdc list, you should always enter the cifs resetdc
command to update the storage system's available domain controller information. The storage
system does not update the domain controller discovery information from network services when
the preferred domain controller list is updated. Failure to reset the domain controller information
can cause a connection failure if the storage system tries to establish a connection with an
unavailable domain controller (or LDAP server).
Note: Storage systems do not automatically perform domain controller discovery operations
upon restart; restarting the storage system does not update the available domain controller and
LDAP server list.
Displaying a list of preferred domain controllers and LDAP servers
You can use the cifs prefdc print command to display a list of preferred domain
controllers and LDAP servers.
Step

1. Enter the following command:

cifs prefdc print [domain]
domain is the domain for which you want to display domain controllers. When a domain is not
specified, this command displays preferred domain controllers for all domains.
Reestablishing the storage system connection with a domain
You can use the cifs resetdc command to reestablish the storage system connection with a
domain.
About this task

The following procedure disconnects your storage system from the current domain controller and
establishes a connection between the storage system and a preferred domain controller. It also forces
domain controller discovery, updating the list of available domain controllers.
Note: This procedure also reestablishes LDAP connections and performs LDAP server discovery.
Step

1. Enter the following command:

cifs resetdc [domain]
domain is the domain from which the storage system disconnects. If it is omitted, the storage
system disconnects from the domain in which the storage system is installed.

Monitoring CIFS activity


This section provides information about monitoring CIFS sessions activity and collecting storage
system statistics.
About this task

You can display the following types of session information:


A summary of session information, which includes storage system information and the number of
open shares and files opened by each connected user.
Share and file information about one connected user or all connected users, which includes:
The names of shares opened by a specified connected user or all connected users
The access levels of opened files
Security information about a specified connected user or all connected users, which includes
the UNIX UID and a list of UNIX groups and Windows groups to which the user belongs.

Displaying a summary of session information


You can use the cifs sessions command to display a summary of session information.
Step

1. Enter the following command:

cifs sessions

CIFS resource limitations


Access to some CIFS resources is limited by your storage system's memory and the maximum
memory available for CIFS services.
These resources include:
Connections
Shares
Share connections
Open files
Locked files
Locks
Note: If your storage system is not able to obtain sufficient resources in these categories, contact
technical support.

Disconnecting a selected user from the command line


You can use the cifs terminate command to disconnect a selected user from the command
line.
Steps

1. To display a list of connected clients, enter the following command:


cifs sessions *

2. To disconnect a client, enter the following command:


cifs terminate client_name_or_IP_address [-t time]
client_name_or_IP_address specifies the name or IP address of the workstation that you
want to disconnect from the storage system.
time specifies the number of minutes before the client is disconnected from the storage system.
Entering 0 disconnects the client immediately.
Note: If you do not specify time and Data ONTAP detects an open file with the client, Data
ONTAP prompts you for the number of minutes it should wait before it disconnects the client.

Disabling CIFS for the entire storage system


Disabling CIFS service is not persistent across reboots. If you reboot the storage system
after disabling CIFS service, Data ONTAP automatically restarts CIFS.
Steps

1. To disable CIFS service, enter the following command:


cifs terminate [-t time]
time is the number of minutes before the storage system disconnects all clients and terminates
CIFS service. Entering 0 makes the command take effect immediately.
Note: If you enter the cifs terminate command without an argument and Data ONTAP
detects an open file with any client, Data ONTAP prompts you for the number of minutes it
should wait before it disconnects the client.
2. Perform one of the following actions:
If you want CIFS service to restart automatically after the next storage system reboot, do nothing.
If you want CIFS service not to restart automatically after the next storage system reboot, rename
the /etc/cifsconfig.cfg file.

Result

Data ONTAP sends a message to all connected clients, notifying the users of the impending
disconnection. After the specified time has elapsed, the storage system disconnects all clients and
stops providing CIFS service.
After you disable CIFS for the entire storage system, most cifs commands become unavailable.
You can use the following cifs commands with CIFS disabled:
cifs prefdc
cifs restart
cifs setup
cifs testdc

@@@@
Creating a storage system domain account before setting up CIFS
You must create the storage system domain account before the cifs setup command is run if your
security structure does not allow you to assign the necessary permissions to the setup program to
create the storage system domain account, or if you intend to use Windows NT4-style authentication.
About this task

If you create the storage system domain account before the cifs setup command is run, you must
follow these guidelines:
You do not need to assign the Create Computer Objects permission.
You can assign permissions specifically on the storage system domain account, instead of
assigning them on the storage system container.
Steps

1. In the Active Directory Users and Computers View menu, ensure that the Advanced Features
menu item is selected.
2. In the Active Directory tree, locate the Organizational Unit (OU) for your storage system,
right-click it, and select New > Computer.
3. Enter the storage system (domain account) name.
You must make a note of the storage system name you entered, to ensure that you enter it
correctly when you run the cifs setup command later.
4. In the "Add this computer to the domain" field, specify the name of the storage system
administrator account.
5. Right-click the computer account you just created, and select Properties from the pop-up menu.
6. Click the Security tab.
7. Select the user or group that adds the storage system to the domain.
8. In the Permissions list, ensure that the following check boxes are selected:
Change Password
Write Public Information
After you finish
When the cifs setup command is run, you see the prompt "Please enter the new hostname." Enter
the storage system name you specified in Step 3.

Time services requirements


You must configure your storage system for time service synchronization. Many services and
applications depend on accurate time synchronization.
During CIFS setup, if the storage system is to be joined to an Active Directory domain, Kerberos
authentication is used. Kerberos authentication requires the storage system's time and the domain
controller's time to match (within 5 minutes). If the times do not match within 5 minutes, setup
and authentication attempts fail.
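The 5-minute tolerance can be expressed as a quick check. The following Python sketch is purely illustrative; the function name and constant are ours, not a Data ONTAP interface:

```python
from datetime import datetime, timedelta

# Kerberos (and therefore CIFS setup against Active Directory) tolerates at
# most 5 minutes of clock skew between the storage system and the DC.
MAX_SKEW = timedelta(minutes=5)

def within_kerberos_skew(filer_time: datetime, dc_time: datetime) -> bool:
    """Return True if the two clocks differ by no more than 5 minutes."""
    return abs(filer_time - dc_time) <= MAX_SKEW

# A filer clock 6 minutes behind the domain controller fails the check,
# so CIFS setup and authentication attempts against that DC would fail.
dc_time = datetime(2011, 3, 3, 22, 14, 26)
filer_time = datetime(2011, 3, 3, 22, 8, 26)
print(within_kerberos_skew(filer_time, dc_time))  # False
```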

If you want to make Active Directory services available to CIFS, you
need the IP addresses of DNS servers that support your Windows Active
Directory domain.
If multiprotocol access is enabled on the storage system, group caching is
beneficial for CIFS access as well as NFS access. With multiprotocol
access, user mapping of CIFS users to NFS users is performed. When a
Windows user requests access to data with UNIX security style, the
Windows user is first mapped to the corresponding UNIX user. The
UNIX user's groups must then be ascertained before the storage system
can determine appropriate access. Failure to enable these two options
together could lead to slow CIFS access to resources due to time spent on
NIS group lookups.
If multiprotocol access is used with NTFS security-style volumes or qtrees, user
mapping also occurs.

CIFS protocol information


If your storage system is licensed for the CIFS protocol, the cifs setup command runs
automatically when basic setup has finished. You must provide information about the Windows
domain, WINS servers, the Active Directory service, and your configuration preferences.
You must provide the following CIFS protocol information:

@@@@

Triage Template - How to troubleshoot issues accessing the Storage System over CIFS (7.x and 8.x 7-Mode)
Description

TECHNICAL TRIAGE TEMPLATE


SECTION 1: Usage
Use:
Refer to this TTT during a new case creation on the NetApp Support site.
It is used by TSEs/Partners when a customer calls Support directly, to help frame the issue.
Section 2 provides links to most common solutions and additional resources for the Product/Technology.
Section 3 provides information to assist with troubleshooting.
Section 4 provides steps to gather relevant data to open a case.

Product/Technology:
CIFS
Audience:
Customer/Partner/TSE working on this Product/Technology

Procedure

SECTION 2: Solutions
Common issues/solutions:
2011588: File on CIFS share is 'Locked By Another User'
2011033: CIFS user is unable to delete a folder and gets a sharing violation, even though the folder is not in use
1011940: How to troubleshoot CIFS client access problems caused by oplock delayed breaks
3011249: What is CIFS Max Multiplex and how to increase the maximum number of simultaneous outstanding Windows client requests that a controller allows?
2010717: CIFS Session Setup Error STATUS_ACCESS_DENIED
3010046: What is the recommended tuning for Windows Terminal Server (WTS) CIFS clients?
2011295: Cannot view snapshots from CIFS clients nor SnapMirror destinations: Access denied
2012905: The options cifs.scopeid setting prevents CIFS access
1011243: How to set up CIFS auditing on the filer
1011064: How to terminate CIFS for a particular volume without affecting other volumes' CIFS accessibility
Additional Help:
1012788: Triage Template - How to troubleshoot CIFS in Clustered Data ONTAP
Customer & Partner Support Community

SECTION 3: Troubleshooting
Client Connectivity Failure:
To troubleshoot client connectivity failures, walk through the following three categories in order:
1. Name Resolution

2. Authentication
3. Permissions

In addition to the issues associated with connecting clients to a NetApp storage system, issues can
also arise with the following features:

Auditing
Vscan/Fpolicy
pBlk Exhaustion
Oplocks
Widelinks
Multi-Protocol Permissions

Troubleshooting Client Connectivity Failures:


Start by asking the following questions:
1. Did CIFS previously work, or is this a new issue?
2. When did the issue start?
3. Were there any changes?
4. Is the issue affecting all users, a group of users or one user?
5. Are the users and the storage system in the same domain?
6. What is the exact error they are reporting (get a screen shot of the error by
pressing Ctrl+Print Screen)?

Name Resolution Troubleshooting:


On the client, go to a command prompt and ping the storage system by name.

If 'ping by name' fails, test pinging the storage system's IP Address

If 'ping by IP' fails, the issue is likely to be related to network connectivity


between the client and the storage system. Check for issues like Firewall,
WAN Accelerators, NAT Devices, and other components associated with the
network.

If 'ping by IP' is successful but ping by Name fails, have the customer check
the DNS configuration of the client

If pinging the storage system by both name and IP Address are successful,
move on to test authentication.
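The ping-based decision flow above can be summarized as a tiny triage helper. This Python sketch is for illustration only; the function name and the category strings are invented:

```python
def next_step(ping_by_ip_ok: bool, ping_by_name_ok: bool) -> str:
    """Map the two ping results to the next area to investigate."""
    if not ping_by_ip_ok:
        return "network path"      # firewall, WAN accelerator, NAT device, routing
    if not ping_by_name_ok:
        return "client DNS"        # name resolution on the client
    return "authentication"        # connectivity is fine; test authentication next

print(next_step(True, False))   # client DNS
print(next_step(True, True))    # authentication
```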

Authentication Troubleshooting:
On the client, go to Start > Run > \\<Storage_System_Name>
Note: Do not put a share after the storage system name. There are potentially two outcomes to
this test.

Connection to the storage system is successful; Explorer opens up quickly


presenting a list of all the shares on the system. If the connection is
successful and opens quickly, move on to testing the permissions.

Connection to the storage system either fails or Explorer opens slowly but
eventually presents a list of all the shares on the system. If the connection
fails or is slow, see the following article to troubleshoot slow authentication
issues:
1012838: How to troubleshoot Microsoft client authentication problems on a
NetApp Storage Controller

Permission Troubleshooting:
Once authentication has been verified as working successfully and the client can connect to the
root of the storage system, have the client navigate into a share. This can be done by either
double-clicking on the share or by performing the following:
On the client, go to Start > Run > \\<Storage_System_Name>\Share
If this fails, the issue is most likely a permission issue. See the following article to troubleshoot
permissions: 1012304: How to troubleshoot Microsoft Client permission problems on a NetApp
filer
Vscan Troubleshooting:
If you are trying to troubleshoot an issue with a Vscan, see the article, 1013399: How to
troubleshoot CIFS file access issues when vscan is involved on Data ONTAP 7.x
pBlk Exhaustion Troubleshooting:

If you are seeing messages indicating pBlk exhaustion, then see the following article to
troubleshoot the cause for the exhaustion: 1013397: How to Troubleshoot pBlk exhaustion
Oplock Delayed Break Troubleshooting:
Oplock delayed breaks are generally noticed in the filer message logs after end users report
Access Denied errors or general connectivity issues to the storage system. The error itself is not
indicative of an issue with the filer. In fact, the storage system is reporting an issue associated
with the client. If excessive oplock delayed break warnings are being reported, see the following
article to troubleshoot the cause: 1011940: How to troubleshoot CIFS client access problems
caused by oplock delayed breaks
Widelink Troubleshooting:
A symbolic link is a special file, created by NFS clients, that points to another file or directory.
Widelink entries are a way to redirect absolute symbolic links on the storage system. They allow
the symbolic link destination to be a CIFS share on the same storage system or on another
storage system. If there are issues with clients traversing Widelinks, see the following article to
troubleshoot the cause: 3011420: How to set up and troubleshoot Widelinks on a NetApp Storage
Controller
Multi-Protocol Permissions:
Data ONTAP supports three styles of security permissions: NTFS, UNIX, and Mixed. The NTFS
security style allows only an NTFS security ACL on files and folders. The UNIX security style
allows only a UNIX security ACL on files and folders. The Mixed security style allows files and
folders to have either an NTFS or UNIX ACL, but not both at the same time.
The solution can be considered Multi-Protocol whenever both Windows and UNIX clients
connect to the same data set. The following should be performed to troubleshoot issues with
Multi-Protocol Permission failures:

Multi-Protocol Usermapping:
The first thing that should be done when troubleshooting Multi-Protocol issues
is to verify that usermapping is set up and working correctly. This may or may
not involve modifying the /etc/usermap.cfg file. See the following article to
understand how to troubleshoot usermapping issues:
1011076: How to perform Windows authentication and Windows-to-UNIX
usermapping

Multi-Protocol Permissions:
After usermapping has been verified to be configured correctly and working,
see the following article to resolve any additional Multi-Protocol permission
issues:

1012304: How to troubleshoot Microsoft Client permission issues on a NetApp Filer

SECTION 4: Data required for a new case


Customer - If opening a case on the NetApp Support site, be
prepared to discuss these questions with the Technical
Support Engineer that works on your case.
TSE - Copy/paste these questions and the customer's answers
into a private case note.

1. INPUT these questions:
Immediately obtain a current AutoSupport, as this includes a lot of CIFS
information, including the messages file.
If Remote Support Agent (RSA) is enabled, trigger an on-demand
AutoSupport from the RSE server.
If RSA (Remote Support Agent) is not enabled, then request an
AutoSupport.
If you are willing to install a small troubleshooting/diagnostic and data
collection tool to facilitate resolving the issue, here is the link for NetApp
System Analyzer for CIFS
(http://now.netapp.com/NOW/download/tools/ntapsa_cifs). Install it on a
system with network access to the troubled storage system(s).
Run the Analyzer tool within NetApp System Analyzer and address or
resolve any configuration issues called out.
If the issue persists, use the Support Report feature within the System
Analyzer for CIFS to send additional data to Support. Provide the user
with the case number and instruct them to click the Send button to
automatically upload the package through FTP.
Review the Support Report by extracting the uploaded .cab file and
clicking index.html.

Did CIFS previously work, or is this a new issue?
When did the issue start?
Were there any changes?
Check the messages logs at the time of the issue's start for errors.
Is LDAP configured in the options?
Is CIFS running? You can check with cifs sessions or other cifs commands.
If not, can you start it? What errors are you seeing?
If not, consider running cifs setup.

Error Messages:
What error messages is the client seeing when trying to connect?
What corresponding errors are seen on the storage system?
What errors are seen in the /etc/messages logs?
What is the qtree security style set to?
Is the issue affecting all users, a group of users, or one user?
Are the users and the filer in the same domain?
If the issue is affecting all users:
Compare the date and time of the storage system and the domain
controller.
storage1> cifs testdc
Is there a five-minute difference between the storage system time and
the DC time? There will be an error in the messages if this is the issue.
If so, verify the time and the time zones are set correctly.
Is the storage system's domain membership okay?
filer> cifs domaininfo
Is the share set up properly?
storage1> cifs shares
Can the users ping the storage system by hostname and IP address?
Can the hosts be pinged from the storage system by hostname and IP
address?
If any pings fail, use the nslookup command. Perform nslookup on
hostnames and IP addresses. If nslookup fails, go to the DNS template.

If the issue is affecting a group of users and some users are unaffected:

Are all affected users on the same subnet? (Network or router issue.)

Can the hosts ping the storage system by name and IP address?

Can the storage system ping the hosts by hostname and IP address?

What error is the user seeing on the client?

What error is seen on the storage system?

Are all affected users in a common Active Directory Group?

Does this group have the proper share level and file system level
permissions?

If this issue is affecting one user:
Can the host ping the storage system by name and IP address?
Can the storage system ping the host by hostname and IP address?
Is the user logged into the host with a valid domain account?
What errors is the user seeing on the client?
What errors are seen on the storage system?
Does the user have the correct share level and file system level
permissions?
Can the user connect to the filer using different credentials?

If opening a case on the NetApp Support site and files are under 25 MB, please use:
For information on how to UPLOAD files via FTP/HTTP/AsperaConnect, see KB 1010090: How to upload a
file to NetApp.

2. UPLOAD the required information:

Verify the user and IP address of the host that is unable to access CIFS.

Enable options cifs.trace_login (remember to disable when done
troubleshooting to keep excessive logging to a minimum)
Enable options cifs.trace_dc_connection (remember to disable when
done troubleshooting to keep excessive logging to a minimum)

Have the client attempt access before gathering AutoSupport in order to
collect cifs trace logging.
Generate a new AutoSupport.
If Remote Support Agent (RSA) is enabled, then go to the RSE and
trigger an AutoSupport.
If RSA is not enabled, then from the filer, use storage1> options
autosupport.doit now

Verify cifs.audit.enable, cifs.audit.file_access_events.enable and
cifs.audit.logon_events.enable are on. Check by running the following
command:
storage1> options cifs.audit
After making sure the auditing is ON, have some users try to access
the shares. Have the caller send NetApp the /etc/log/adtlog.evt file.
If the user cannot retrieve the adtlog.evt file due to the CIFS issue,
they can turn on FTP to get the file (if they are not using FTP already):
options ftpd.enable on
options ftpd.dir.override /vol/etc/log

Send the host's application or system logs to NetApp (whichever one is
displaying error messages).

Run a packet trace from the storage system and from the client (if possible):
Start the packet trace:
storage1> pktt start all -d <dir path>
Attempt to connect to the storage system from the client.
Stop the packet trace:
storage1> pktt stop all

@@@

How to troubleshoot Microsoft client authentication problems on a NetApp Storage Controller

KB Doc ID 1012838 Version: 8.0 Published date: 07/16/2014 Views: 14430

Description
1. Connection to the storage system is successful,
explorer opens up quickly presenting a listing of all the
shares on the system.
2. Connection to the storage system is successful,
explorer opens slowly but eventually presents a listing
of all the shares on the system.
3. Connection to the storage system fails with an error
message.
Procedure
1. Connection to the storage system is successful, MS
Explorer opens up quickly presenting a list of all the
shares on the system.
If the connection is successful and opens quickly, move
on to testing permissions.
2. Connection to the storage system is successful, MS
Explorer opens slowly but eventually presents a list of
all the shares on the system.
If the connection is successful but slow, the
problem may be caused by one of the following:
1. Slow Authentication due to slow Domain
Controllers
Run cifs stat twice waiting 30 seconds in
between and verify if the Max gAuthQueue depth
is incrementing over time:
Max Multiplex = 4, Max pBlk Exhaust = 0, Max pBlk Reserve Exhaust = 0
Max FIDs = 107, Max FIDs on one tree = 101
Max Searches on one tree = 2, Max Core Searches on one tree = 0
Max sessions = 6
Max trees = 11
Max shares = 12
Max session UIDs = 3, Max session TIDs = 7
Max locks = 109
Max credentials = 5
Max group SIDs per credential = 12
Max pBlks = 896 Current pBlks = 896 Num Logons = 0
Max reserved pBlks = 32 Current reserved pBlks = 32
Max gAuthQueue depth          = 3  <<< This is the counter you want to look for
Max gSMBBlockingQueue depth   = 2
Max gSMBTimerQueue depth      = 4
Max gSMBAlfQueue depth        = 1
Max gSMBRPCWorkerQueue depth  = 1
Max gOffloadQueue depth       = 4

The value of 3 or 4 is normal. Clear the counter by running cifs stat -z.
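A sketch of how the two `cifs stat` samples could be compared mechanically (Python for illustration; the regular expression and the "normal" threshold of 4 are assumptions based on the sample output above, not a NetApp tool):

```python
import re

def gauth_depth(cifs_stat_output: str) -> int:
    """Extract the 'Max gAuthQueue depth' counter from cifs stat output."""
    m = re.search(r"Max gAuthQueue depth\s*=\s*(\d+)", cifs_stat_output)
    if m is None:
        raise ValueError("Max gAuthQueue depth counter not found")
    return int(m.group(1))

def auth_queue_growing(first_sample: str, second_sample: str, normal_max: int = 4) -> bool:
    """True if the depth rose between samples taken ~30 seconds apart and
    exceeds the normal range, suggesting slow domain controller responses."""
    return (gauth_depth(second_sample) > gauth_depth(first_sample)
            and gauth_depth(second_sample) > normal_max)

print(auth_queue_growing("Max gAuthQueue depth = 3",
                         "Max gAuthQueue depth = 27"))  # True
```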
Run the stats command to collect domain
controller latency
storage system> priv set diag
storage system*> stats show cifsdomain
cifsdomain:10.42.158.117:netlogon_latency:0ms
cifsdomain:10.42.158.117:netlogon_latency_base:0
cifsdomain:10.42.158.117:lsa_latency:0ms
cifsdomain:10.42.158.117:lsa_latency_base:0
cifsdomain:10.42.158.117:samr_latency:0ms
cifsdomain:10.42.158.117:samr_latency_base:0
storage system*> priv set

If netlogon_latency is high, use cifs prefdc to
move the storage system to another DC while
the customer examines the DC for performance
issues.
storage system> cifs prefdc add domain
10.42.158.118

If netlogon_latency looks good, move to


troubleshooting slow authentication due to
Windows to UNIX Usermapping.
2. Slow Authentication because of slow Windows to
UNIX Usermapping with a dependency on
external NIS or UNIX LDAP servers.
The following KB explains the Windows to UNIX

Usermapping process (please read)


1011076: How to perform Windows
authentication and Windows-to-UNIX
usermapping
If the system is running NIS, collect the following
output from the storage system to verify the
response time of the NIS Servers. Specifically,
look at the 3 Most Recent Lookups section.
storage system> nis info
NIS domain is nis.domain.com
NIS group cache has been enabled
The group cache is not available.
IP Address  Type  State    Bound  Last Polled                  Client calls  Became Active
------------------------------------------------------------------------------------------
a.b.c.d     PREF  NO RESP  NO     Sun Feb 8 19:03:10 GMT 2009  0
NIS Performance Statistics:
Number of YP Lookups: 4340
Total time spent in YP Lookups: 35568 ms, 746 us
Number of network re-transmissions: 0
Minimum time spent in a YP Lookup: 0 ms, 0 us
Maximum time spent in a YP Lookup: 985 ms, 903 us
Average time spent in YP Lookups: 8 ms, 195 us

Three Most Recent Lookups:
[0] Lookup time: 150 ms, 984 us Number of network re-transmissions: 0
[1] Lookup time: 150 ms, 72 us Number of network re-transmissions: 0
[2] Lookup time: 190 ms, 3 us Number of network re-transmissions: 0

NIS netgroup (*.* and *.nisdomain) cache status:


Netgroup cache: uninitialized
*.* eCode: 0
*.nisdomain eCode: 0

NIS Slave disabled


If NIS lookups are taking a long time, have them
disable NIS and attempt to connect to \\storage
system name.
If the connection is now fast, NIS is the issue; the
customer will need to investigate the NIS server
to determine the cause of the latency.
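As an illustration, the lookup times in the "Three Most Recent Lookups" section could be extracted and screened like this (a Python sketch; the 100 ms threshold is an arbitrary example, not a NetApp guideline):

```python
import re

def recent_lookups_ms(nis_info_output: str) -> list:
    """Return the recent lookup times from `nis info` output, in milliseconds."""
    pairs = re.findall(r"Lookup time:\s*(\d+)\s*ms,\s*(\d+)\s*us", nis_info_output)
    return [int(ms) + int(us) / 1000.0 for ms, us in pairs]

sample = """Three Most Recent Lookups:
[0] Lookup time: 150 ms, 984 us Number of network re-transmissions: 0
[1] Lookup time: 150 ms, 72 us Number of network re-transmissions: 0
[2] Lookup time: 190 ms, 3 us Number of network re-transmissions: 0"""

slow = [t for t in recent_lookups_ms(sample) if t > 100]
print(len(slow))  # 3 -- all three lookups here exceed 100 ms
```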
If the system is running UNIX LDAP (that is,
options ldap is configured), collect the following
data:
storage system> options ldap
ldap.ADdomain                           mydomain.com
ldap.base
ldap.base.group
ldap.base.netgroup
ldap.base.passwd
ldap.enable                             on
ldap.minimum_bind_level                 anonymous
ldap.name
ldap.nssmap.attribute.gecos             gecos
ldap.nssmap.attribute.gidNumber         gidNumber
ldap.nssmap.attribute.groupname         cn
ldap.nssmap.attribute.homeDirectory     homeDirectory
ldap.nssmap.attribute.loginShell        loginShell
ldap.nssmap.attribute.memberNisNetgroup memberNisNetgroup
ldap.nssmap.attribute.memberUid         memberUid
ldap.nssmap.attribute.netgroupname      cn
ldap.nssmap.attribute.nisNetgroupTriple nisNetgroupTriple
ldap.nssmap.attribute.uid               uid
ldap.nssmap.attribute.uidNumber         uidNumber
ldap.nssmap.attribute.userPassword      userPassword
ldap.nssmap.objectClass.nisNetgroup     nisNetgroup
ldap.nssmap.objectClass.posixAccount    posixAccount
ldap.nssmap.objectClass.posixGroup      posixGroup
ldap.passwd                             ******
ldap.port                               389
ldap.servers                            192.168.1.10
ldap.servers.preferred
ldap.ssl.enable                         off
ldap.timeout                            20
ldap.usermap.attribute.unixaccount      unixaccount
ldap.usermap.attribute.windowsaccount   windowsaccount
ldap.usermap.base
ldap.usermap.enable                     off

storage system> priv set diag


storage system*> stats show ldap
ldap:ldap:avg_latency:100ms
ldap:ldap:latency_base:2
storage system*> priv set

If LDAP latency is high, disable LDAP (options
ldap.enable off) and test the connection to
\\storage system name.
If the connection is now fast, the customer will
need to investigate the LDAP server to determine
the cause of the latency.
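The avg_latency counter from `stats show ldap` can be pulled out with a small parser (a Python sketch assuming the counter format shown in the sample above):

```python
import re

def ldap_avg_latency_ms(stats_output: str) -> int:
    """Parse avg_latency (in ms) from `stats show ldap` output."""
    m = re.search(r"ldap:ldap:avg_latency:(\d+)ms", stats_output)
    if m is None:
        raise ValueError("avg_latency counter not found")
    return int(m.group(1))

sample = "ldap:ldap:avg_latency:100ms\nldap:ldap:latency_base:2"
print(ldap_avg_latency_ms(sample))  # 100
```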
3. Connection to the storage system fails with an error
message.
Collect the error message (Ctrl+Print Screen)
On the Windows Client, go to Start -> Run -> \\<ip
address of storage system> (again do not add a
share at the end)
If this is successful, the issue likely has to do with
Kerberos authentication
Check the Time on the client, DC and storage
system and verify they are all within 5 minutes of each
other using the following steps:
From the Windows client command prompt:

C:\> net time \\localhost


Current time at \\localhost is 3/3/2011 10:04:06 PM
C:\> net time \\DC1
Current time at \\DC1 is 3/3/2011 10:14:26 PM

From the storage system:

storage system> date


Thu Mar 3 10:14:31 PST 2011

If any of the times are off, correct the one that is


incorrect and connect to \\storage system name again.
The date command on the storage system can be used
to update the time on the storage system. Also, make
sure that timed is set up correctly on the storage
system.
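The net time and date outputs above can also be compared programmatically. This Python sketch parses the net time line format shown and checks the 5-minute Kerberos tolerance (the format string is an assumption based on the sample output):

```python
from datetime import datetime

def parse_net_time(line: str) -> datetime:
    """Parse the timestamp from a `net time` response line
    (e.g. 'Current time at <host> is 3/3/2011 10:14:26 PM')."""
    stamp = line.split(" is ", 1)[1].strip()
    return datetime.strptime(stamp, "%m/%d/%Y %I:%M:%S %p")

client = parse_net_time(r"Current time at \\localhost is 3/3/2011 10:04:06 PM")
dc = parse_net_time(r"Current time at \\DC1 is 3/3/2011 10:14:26 PM")
skew_minutes = abs((dc - client).total_seconds()) / 60
print(skew_minutes > 5)  # True: outside the Kerberos tolerance, so fix the clock
```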
If the connection to \\<ip address of storage system>
fails:
Check the storage system to verify it can connect to
the Domain Controllers using the following steps:
On the storage system, collect cifs domaininfo output:
NetBios Domain:           CA2K3DOM
Windows 2003 Domain Name: ca2k3dom.ngslabs.netapp.com
Type:                     Windows 2003
Filer AD Site:            Default-First-Site-Name
Current Connected DCs:    \\CA2K3DOMDC PDCBROKEN
Total DC addresses found: 1
Preferred Addresses:      None
Favored Addresses:        10.42.158.117 CA2K3DOMDC BROKEN
Other Addresses:          None
Connected AD LDAP Server: \\ca2k3domdc.ca2k3dom.ngslabs.netapp.com
Preferred Addresses:      None
Favored Addresses:        10.42.158.117 ca2k3domdc.ca2k3dom.ngslabs.netapp.com
Other Addresses:          None

If all the DCs show as BROKEN (PDCBROKEN /
BDCBROKEN), check the time on both the storage
system and the Domain Controller and verify they are
within 5 minutes.
storage system> date
Thu Mar 3 11:01:22 PST 2011

On the DC run the following from the command prompt:


c:\> net time \\localhost
Current time at \\localhost is 3/3/2011
10:08:06 PM

If the storage system time is off by more than 5


minutes, do the following:
1. Use the date command to fix the time on
the storage system
storage system> date 1108

2. Run cifs resetdc to re-establish the connection to


the DCs
storage system> cifs resetdc

If the storage system time is correct, check that
the machine account for the storage system in
Active Directory exists.
If the account no longer exists in Active Directory,
or it is suspected that there may be an issue with
the account in Active Directory, re-run cifs
setup to re-create the storage system's machine
account in Active Directory.
Note: You will need a user who
has the permission to join machines to the
domain in order to re-run cifs setup.
3. Check to see if there are any firewalls, WAN
optimizers, NAT devices or other network devices
that may be disrupting traffic between the client
and storage system.

If further assistance is required:


If none of the above steps resolve the authentication issue, collect the
following data and contact NetApp Technical Support.
1. On the client, use NetMon or Wireshark to start a client
side packet trace
2. On the Storage Controller, start a matching packet
trace by running the following command:
storage system> pktt start all -d /etc/crash

3. Generate the authentication failure


4. Stop the Storage Controller packet trace
storage system> pktt stop all

5. Stop the Client side packet trace


6. Enable autoindex on the Storage Controller
storage system> options httpd.autoindex.enable on

7. On a client machine, open a web browser and navigate to:
http://<storage_system_name_or_IP>/na_admin/cores (if
using SSL, you can also use https://)
8. Provide login credentials if prompted. These will be the
same credentials as you would use to access FilerView.
9. Select the trace file that covers the steps done
above. The trace files will have a .trc file extension and
there will be a trace file for each interface configured on
the storage system, along with a date/timestamp
naming convention. If in doubt, gather all the ones you
see for the date of the test and zip them into a single file.
@@@@@
How to troubleshoot Microsoft client authentication problems on a NetApp Storage
Controller

KB Doc ID 1012838 Version: 8.0 Published date: 07/16/2014 Views: 14430

Description
1. Connection to the storage system is successful, explorer opens up quickly
presenting a listing of all the shares on the system.
2. Connection to the storage system is successful, explorer opens slowly but
eventually presents a listing of all the shares on the system.
3. Connection to the storage system fails with an error message.
Procedure
1. Connection to the storage system is successful, MS Explorer opens up quickly
presenting a list of all the shares on the system.
If the connection is successful and opens quickly, move on to testing
permissions.
2. Connection to the storage system is successful, MS Explorer opens slowly but
eventually presents a list of all the shares on the system.
If the connection is successful but slow, the problem may be caused by one
of the following:
1. Slow Authentication due to slow Domain Controllers
Run cifs stat twice waiting 30 seconds in between and verify if the
Max gAuthQueue depth is incrementing over time:

Max
= 0
Max
Max
Max
Max
Max
Max
Max
Max
Max
Max
Max
Max

Multiplex = 4, Max pBlk Exhaust = 0, Max pBlk Reserve Exhaust

Max
Max
Max
Max
Max

gSMBBlockingQueue depth
gSMBTimerQueue depth
gSMBAlfQueue depth
gSMBRPCWorkerQueue depth
gOffloadQueue depth

FIDs = 107, Max FIDs on one tree = 101


Searches on one tree = 2, Max Core Searches on one tree = 0
sessions = 6
trees = 11
shares = 12
session UIDs = 3, Max session TIDs = 7
locks = 109
credentials = 5
group SIDs per credential = 12
pBlks = 896 Current pBlks = 896 Num Logons = 0
reserved pBlks = 32 Current reserved pBlks = 32
gAuthQueue depth
= 3 <<< This is the counter you want

to look for

=
=
=
=
=

2
4
1
1
4

The value of 3 or 4 is normal. Clear the counter by running a cifs stat


-z

Run the stats command to collect domain controller latency


storage system> priv set diag
storage system*> stats show cifsdomain
cifsdomain:10.42.158.117:netlogon_latency:0ms
cifsdomain:10.42.158.117:netlogon_latency_base:0
cifsdomain:10.42.158.117:lsa_latency:0ms
cifsdomain:10.42.158.117:lsa_latency_base:0
cifsdomain:10.42.158.117:samr_latency:0ms
cifsdomain:10.42.158.117:samr_latency_base:0
storage system*> priv set

If netlogon_latency is high, use cifs prefdc to move the storage


system to another DC while the customer examines the DC for
performance issue.
storage system> cifs prefdc add domain 10.42.158.118

If netlogon_latency looks good, move to troubleshooting slow


authentication due to Windows to UNIX Usermapping.
2. Slow Authentication because of slow Windows to UNIX Usermapping
with a dependency on external NIS or UNIX LDAP servers.
The following KB explains the Windows to UNIX Usermapping process
(please read)
1011076: How to perform Windows authentication and Windows-toUNIX usermapping

If the system is running NIS, collect the following output from
the storage system to verify the response time of the NIS servers.
Specifically, look at the "Three Most Recent Lookups" section.
storage system> nis info
NIS domain is nis.domain.com
NIS group cache has been enabled
The group cache is not available.
IP Address  Type  State    Bound  Last Polled                   Client calls  Became Active
-------------------------------------------------------------------------------------------
a.b.c.d     PREF  NO RESP  NO     Sun Feb 8 19:03:10 GMT 2009   0
NIS Performance Statistics:
Number of YP Lookups: 4340
Total time spent in YP Lookups: 35568 ms, 746 us
Number of network re-transmissions: 0
Minimum time spent in a YP Lookup: 0 ms, 0 us
Maximum time spent in a YP Lookup: 985 ms, 903 us
Average time spent in YP Lookups: 8 ms, 195 us

Three Most Recent Lookups:
[0] Lookup time: 150 ms, 984 us  Number of network re-transmissions: 0
[1] Lookup time: 150 ms, 72 us   Number of network re-transmissions: 0
[2] Lookup time: 190 ms, 3 us    Number of network re-transmissions: 0

NIS netgroup (*.* and *.nisdomain) cache status:


Netgroup cache: uninitialized
*.* eCode: 0
*.nisdomain eCode: 0

NIS Slave disabled


If NIS lookups are taking a long time, have the customer disable NIS and
attempt to connect to \\storage system name.
If the connection is now fast, NIS is the issue; the customer will need to
investigate the NIS server to determine the cause of the latency.
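The latency figures reported by nis info can be sanity-checked from the totals it prints. A minimal sketch in Python, using the sample values above; the 50 ms "slow" threshold is an illustrative assumption, not a NetApp-documented limit:

```python
# Sanity-check the averages reported by `nis info`. The totals come
# from the sample output above (35568 ms, 746 us across 4340 lookups).

def avg_lookup_ms(total_ms: float, total_us: float, lookups: int) -> float:
    """Return the average YP lookup time in milliseconds."""
    return (total_ms + total_us / 1000.0) / lookups

avg = avg_lookup_ms(35568, 746, 4340)
print(f"average YP lookup: {avg:.1f} ms")   # close to the 8 ms, 195 us reported
if avg > 50:  # illustrative threshold, not from NetApp documentation
    print("NIS lookups look slow; investigate the NIS server")
```

This simply reproduces the "Average time spent in YP Lookups" arithmetic, so you can verify the reported average against the raw totals.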
If the system is running UNIX LDAP (that is, options ldap is configured),
collect the following data:
storage system> options ldap
ldap.ADdomain                           mydomain.com
ldap.base
ldap.base.group
ldap.base.netgroup
ldap.base.passwd
ldap.enable                             on
ldap.minimum_bind_level                 anonymous
ldap.name
ldap.nssmap.attribute.gecos             gecos
ldap.nssmap.attribute.gidNumber         gidNumber
ldap.nssmap.attribute.groupname         cn
ldap.nssmap.attribute.homeDirectory     homeDirectory
ldap.nssmap.attribute.loginShell        loginShell
ldap.nssmap.attribute.memberNisNetgroup memberNisNetgroup
ldap.nssmap.attribute.memberUid         memberUid
ldap.nssmap.attribute.netgroupname      cn
ldap.nssmap.attribute.nisNetgroupTriple nisNetgroupTriple
ldap.nssmap.attribute.uid               uid
ldap.nssmap.attribute.uidNumber         uidNumber
ldap.nssmap.attribute.userPassword      userPassword
ldap.nssmap.objectClass.nisNetgroup     nisNetgroup
ldap.nssmap.objectClass.posixAccount    posixAccount
ldap.nssmap.objectClass.posixGroup      posixGroup
ldap.passwd                             ******
ldap.port                               389
ldap.servers                            192.168.1.10
ldap.servers.preferred
ldap.ssl.enable                         off
ldap.timeout                            20
ldap.usermap.attribute.unixaccount      unixaccount
ldap.usermap.attribute.windowsaccount   windowsaccount
ldap.usermap.base
ldap.usermap.enable                     off
storage system> priv set diag
storage system*> stats show ldap
ldap:ldap:avg_latency:100ms
ldap:ldap:latency_base:2
storage system*> priv set

If LDAP latency is high, disable ldap (options ldap.enable off) and test
the connection to \\storage system name
If the connection is now fast, the customer will need to investigate the
LDAP server to determine the cause of the latency.
3. Connection to the storage system fails with an error message.
Collect the error message (Ctrl+Print Screen)
On the Windows Client, go to Start -> Run -> \\<ip address of storage
system> (again do not add a share at the end)
If this is successful, the issue likely has to do with Kerberos authentication
Check the Time on the client, DC and storage system and verify they are all
within 5 minutes of each other using the following steps:
From the Windows client command prompt:
C:\> net time \\localhost
Current time at \\localhost is 3/3/2011 10:04:06 PM
C:\> net time \\DC1
Current time at \\DC1 is 3/3/2011 10:14:26 PM

From the storage system:

storage system> date
Thu Mar  3 10:14:31 PST 2011

If any of the times are off, correct the one that is incorrect and connect to
\\storage system name again.
The date command on the storage system can be used to update the time on
the storage system. Also, make sure that timed is setup correctly on the
storage system.
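The 5-minute tolerance above corresponds to the default Kerberos clock-skew limit. The comparison being performed with net time and date can be sketched as follows, using the sample timestamps from this article:

```python
# Check whether two clocks are within the Kerberos clock-skew limit
# (5 minutes by default). Timestamps are the sample values above.

from datetime import datetime, timedelta

MAX_SKEW = timedelta(minutes=5)

client = datetime(2011, 3, 3, 22, 4, 6)    # net time \\localhost
dc     = datetime(2011, 3, 3, 22, 14, 26)  # net time \\DC1

skew = abs(client - dc)
print(f"skew: {skew}")
if skew > MAX_SKEW:
    print("clock skew exceeds 5 minutes; Kerberos authentication will fail")
```

With the sample values, the skew is just over 10 minutes, so the DC or client clock must be corrected before Kerberos authentication can succeed.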
If the connection to \\<ip address of storage system> fails:
Check the storage system to verify it can connect to the Domain Controllers
using the following steps:
On the storage system, collect cifs domaininfo output:
NetBios Domain:           CA2K3DOM
Windows 2003 Domain Name: ca2k3dom.ngslabs.netapp.com
Type:                     Windows 2003
Filer AD Site:            Default-First-Site-Name

Current Connected DCs:    \\CA2K3DOMDC  PDC BROKEN
Total DC addresses found: 1
Preferred Addresses:      None
Favored Addresses:        10.42.158.117 CA2K3DOMDC BROKEN
Other Addresses:          None

Connected AD LDAP Server: \\ca2k3domdc.ca2k3dom.ngslabs.netapp.com
Preferred Addresses:      None
Favored Addresses:        10.42.158.117 ca2k3domdc.ca2k3dom.ngslabs.netapp.com
Other Addresses:          None

If all the DCs show as BROKEN (PDC BROKEN / BDC BROKEN), check the time
on both the storage system and the Domain Controller and verify they are
within 5 minutes of each other.
storage system> date
Thu Mar 3 11:01:22 PST 2011

On the DC, run the following from the command prompt:
c:\> net time \\localhost
Current time at \\localhost is 3/3/2011 10:08:06 PM

If the storage system time is off by more than 5 minutes, do the following:
1. Use the date command to fix the time on the storage system
storage system> date 1108

2. Run cifs resetdc to re-establish the connection to the DCs


storage system> cifs resetdc

If the storage system time is correct, check that the machine account
for the storage system in active directory exists.
If the account no longer exists in Active Directory or it is suspected
that there may be an issue with the account in Active Directory, re-run
cifs setup to re-create the storage system's machine account in
Active Directory.
Note: You will need a user who has the permission to join machines to
the domain in order to re-run cifs setup.
3. Check to see if there are any firewalls, wan optimizers, NAT devices or
other network devices that may be disrupting traffic between the client
and storage system.

If further assistance is required:


If none of the above steps resolve the authentication issue, collect the following data and contact
NetApp Technical Support.
1. On the client, use NetMon or Wireshark to start a client side packet trace
2. On the Storage Controller, start a matching packet trace by running the
following command:
storage system> pktt start all -d /etc/crash

3. Generate the authentication failure


4. Stop the Storage Controller packet trace
storage system> pktt stop all

5. Stop the Client side packet trace


6. Enable autoindex on the Storage Controller
storage system> options httpd.autoindex.enable on

7. On a client machine open web browser and navigate to:


http://<storage_system_name_or_IP>/na_admin/cores (if using SSL, you can
also use https://)
8. Provide login credentials if prompted. These will be the same credentials you
would use to access FilerView.
9. Select the trace file that covers the steps done above. The trace files will
have a .trc file extension, and there will be a trace file for each interface
configured on the storage system, with a date/timestamp naming
convention. If in doubt, gather all the traces you see for the date of the test
and zip them into a single file.

@@@@
How to perform Windows authentication and Windows-to-UNIX usermapping

KB Doc ID 1011076 Version: 5.0 Published date: 07/27/2015 Views: 8773

Description

How are Windows users authenticated and when are they mapped to UNIX Users?
Procedure

When a Windows user connects to a Filer, the authentication process is broken into two parts.

Finish Windows Authentication

Map the Windows User to a UNIX User

Important: Every Windows user that connects to a Filer is mapped to a UNIX user regardless of
the type of qtree they will be connecting to. This is different from users connecting via UNIX,
as those users are only mapped to Windows users when they access an NTFS or Mixed qtree.
The Windows Authentication process can be broken down into the following steps:
Finish Windows Authentication

If NTLM is being used, the Filer challenges the client, receives the NTLM
Challenge Response back from the client, and sends the client's NTLM Challenge
Response, the original NTLM Challenge, and the username to the DC to finish
authenticating the user. If the DC, using its own copy of the client's password
hash to encrypt the original NTLM Challenge, calculates the same NTLM
Challenge Response as the one sent by the Filer, a successful response is
sent back to the Filer along with other login information, including all of the
user's Windows group memberships.

If Kerberos is being used, the Client will give the Filer the AP-REQ that it
received from the KDC. The Filer will then need to use its local copy of its
password hash to decrypt the AP-REQ.

Map the Windows User to a UNIX User


The Filer performs the following steps to map the Windows user to a UNIX user. In this
example, we will use User1 as the Windows username, and we will assume that there is no entry
in usermap.cfg and that LDAP and NIS are both enabled:

1. Check the Usermap.cfg file to see if an entry exists that will map the Windows
User to a UNIX User. If an entry is found, move to Step 2. If no entry is found,
map the Windows User to the same UNIX name in all lower case. (i.e.,
Domain\User1 == user1)
2. Check the configuration of the Nsswitch.conf file to see how to perform UNIX
User lookups. In this example, we will say it looks like the following:

Filer1> rdfile /etc/nsswitch.conf
hosts: files nis dns
passwd: files nis ldap
netgroup: files nis ldap
group: files nis ldap
shadow: files nis
3. Based on the entry above, we will first look for user1 in /etc/passwd. If it is
not found, we perform a NIS search to lookup user1. If it is not found in NIS,
we move to LDAP to look up user1. If user1 is not found anywhere, we lookup
the value of options wafl.default_unix_user. We will assume
wafl.default_unix_user is left to the default account of pcuser.
4. We then repeat the lookup process for pcuser which is generally found in the
/etc/passwd file. Once we have the UID of the user, we then proceed to
lookup secondary groups of the user based on the group entry of the
nsswitch.conf file.
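The lookup cascade in steps 1-4 can be sketched as follows. The source tables and UID values here are hypothetical stand-ins for /etc/passwd, NIS, and LDAP; only the nsswitch ordering and the pcuser fallback come from the article:

```python
# Sketch of the UNIX user lookup cascade, assuming the nsswitch.conf
# line "passwd: files nis ldap" shown above. The databases below are
# hypothetical; 65534 is the conventional pcuser UID.

PASSWD_ORDER = ["files", "nis", "ldap"]
DEFAULT_UNIX_USER = "pcuser"            # options wafl.default_unix_user

sources = {
    "files": {"root": 0, "pcuser": 65534},   # stand-in for /etc/passwd
    "nis":   {},                              # stand-in for NIS maps
    "ldap":  {},                              # stand-in for UNIX LDAP
}

def lookup_uid(unix_name: str) -> int:
    """Resolve a UNIX name to a UID in nsswitch order, falling back to
    wafl.default_unix_user when the name is found nowhere."""
    for src in PASSWD_ORDER:
        if unix_name in sources[src]:
            return sources[src][unix_name]
    # Not found anywhere: repeat the lookup for the default user.
    for src in PASSWD_ORDER:
        if DEFAULT_UNIX_USER in sources[src]:
            return sources[src][DEFAULT_UNIX_USER]
    raise LookupError("default UNIX user not found either")

# DOMAIN\User1 with no usermap.cfg entry is first tried as "user1":
print(lookup_uid("user1"))   # falls through all sources to pcuser -> 65534
```

Note that the same cascade runs a second time for pcuser itself, which is why step 4 in the article describes repeating the lookup process for the default user.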

The Windows Authentication portion will return the username, user SID, and any group SIDs
that the user is a member of. The UNIX user lookup will return the following UNIX
information: uid, uidnumber, gidnumber, userpassword, homedirectory, loginshell, gecos, and
secondary groups (other gids). Once done, the Filer will cache the entries in the Write Anywhere
File Layout (WAFL) credential cache, which can be viewed by using the wcc command on the
Filer.
Note: You can use name mapping only for users, not for groups. It is not possible to map CIFS
users to a group ID (GID), or UNIX users to a group in the Active Directory (AD). Similarly, it is
not possible to map a GID to a group or a user in AD, or an AD group to a UNIX UID or GID.
Related Link:
https://library.netapp.com/ecmdocs/ECMP1196891/html/GUID-7AB09327-2879-4066-9A7F1A25B3CB3AA7.html
Why do I see user accounts that end in a $ always get mapped to pcuser?

The Filer maps every Windows user. A user account ending in a $ generally signifies that it is
a machine account. Since a machine account is technically a user account, the Filer will map it to
a UNIX account using the above process. In general, most people do not map machine accounts
in the usermap.cfg file, so they ultimately fall back to the default UNIX mapping. One
common reason machine accounts are mapped is that when a user connects to a share path,
the Windows host will first try to connect to the path via a DFS Referral using its own credentials
(i.e., machinename$).
If this fails because the path is not part of a DFS Tree, the Windows host will then re-authenticate
using the user credentials and proceed with a normal tree connect to the share.
@@@@@@@@@@@@@@@@@@@@@@@@@@@
How to troubleshoot Microsoft Client permission issues on a Data ONTAP 7-Mode
storage system

KB Doc ID 1012304 Version: 14.0 Published date: 08/29/2016 Views: 27678

Description

This article describes how to troubleshoot Microsoft Client permission issues on a Data ONTAP
7-Mode storage system. For details on troubleshooting these issues on a clustered Data ONTAP
system, see article 1014284: How to troubleshoot Microsoft Client permission issues on a
NetApp Vserver running clustered Data ONTAP.
Procedure

For all versions of Data ONTAP, there are a few steps to follow before troubleshooting
permission issues:
1. Ping the Storage Controller by name or use nslookup to verify that name
resolution is working correctly. If name resolution of the Storage Controller
does not work, that will need to be addressed before a permission issue can
be diagnosed.
2. From the Windows client, go to Start > Run and connect to the root of the
Storage Controller (do not specify a share name).
Example: \\filer
If this test fails, the issue is likely an Authentication or SMB Signing issue. Any Authentication
issues will need to be addressed before a permission issue can be diagnosed.

Once Name Resolution and Authentication have been determined as working successfully, use
the following steps for the appropriate version of Data ONTAP that the Storage Controller is
running.
Data ONTAP 7.2 to 7.3
Starting in Data ONTAP 7.2, the fsecurity feature was added to assist in permission
troubleshooting. The following is an example of troubleshooting a permission denied issue for a
user attempting to access \\filer\vol1\file1.txt
Filer1*> cifs shares
Name    Mount Point    Description
----    -----------    -----------
ETC$    /etc           everyone / Full Control
                       BUILTIN\Administrators / Full Control
C$      /              everyone / Full Control
                       BUILTIN\Administrators / Full Control
vol1    /vol/vol1      everyone / Full Control
Filer1*> fsecurity show /vol/vol1


[/vol/vol1 - Directory (inum 64)]
Security style: NTFS
Effective style: NTFS
DOS attributes: 0x0030 (---AD---)
Unix security:
uid: 0 (root)
gid: 0
mode: 0700 (rwx------)
NTFS security descriptor:
Owner: BUILTIN\Administrators
Group: BUILTIN\Administrators
DACL:
Allow - BUILTIN\Administrators - 0x001f01ff (Full Control) - OI|CI
Filer1*> fsecurity show /vol/vol1/file1.txt
[/vol/vol1/file1.txt - File (inum 100)]
Security style: NTFS
Effective style: NTFS
DOS attributes: 0x0020 (---A----)

Unix security:
uid: 0 (root)
gid: 0
mode: 0777 (rwxrwxrwx)
NTFS security descriptor:
Owner: DOMAIN\joe
Group: DOMAIN\Domain Users
DACL: Allow - DOMAIN\Group1 - 0x001f01ff (Full Control)
Filer1*> wcc -s joe
(NT - UNIX) account name(s): (DOMAIN\joe - pcuser)
***************
UNIX uid = 65534
NT membership
DOMAIN\joe
DOMAIN\Domain Users
DOMAIN\Group1
BUILTIN\Users
User is also a member of Everyone, Network Users,
Authenticated Users
***************

1. Collect cifs shares output
2. Collect fsecurity show of the directory above the file in question
3. Collect fsecurity show of the file in question
4. Collect wcc -s output of the user in question

Once the above data is collected, the following steps can be used to troubleshoot the errors:
1. Check the cifs shares output and verify that the user is explicitly defined for
Share Level access or is a member of a group that has Share Level access.
Use the output from wcc to verify the group of which the user is a member. In
the example above, the user is considered a member of the Everyone group
and therefore has access at the Share Level.
2. Check the fsecurity show output of the file and directory and examine the
Discretionary Access Control List (DACL) portion of the output. The DACL is
where permission will be allowed or denied.
o In fsecurity, if the Effective style shows as NTFS, focus on the NTFS
security descriptor portion for NTFS permission troubleshooting. The
user in the wcc -s output should be explicitly defined, or be in a group
that is defined, in the DACL portion of fsecurity show. If this is not
true, access will not be granted, and the ACL on the file will need to
be adjusted to grant the user the correct access.

o In fsecurity, if the Effective style shows as Unix, focus on the Unix
security portion of the output for UNIX-style permission
troubleshooting. Example fsecurity output would look similar to the
following:

Filer1*> fsecurity show /vol/vol2/file2.txt
[/vol/vol2/file2.txt - File (inum 100)]
Security style: Unix
Effective style: Unix
DOS attributes: 0x0020 (---A----)
Unix security:
uid: 0 (root)
gid: 1 (daemon)
mode: 0750 (rwxr-x---)
No security descriptor available

To troubleshoot UNIX permissions, look at the Owner and Group attributes and see
what UID/GID they are set to. In this case, the Owner is root and the Group is daemon.
Then examine the wcc output above and see whether the user is mapped to a UNIX
user that is root or in the group daemon. If not, the Other permission bits take
effect and control whether the Windows user gets access. For more information on
how to map Windows users to UNIX users, see article 1011076: How to perform
Windows authentication and Windows-to-UNIX user mapping.

o In fsecurity, if the Security style shows as fixed, base the permission
troubleshooting on the Effective style as labeled: if the Effective style
is NTFS, use the NTFS permission troubleshooting above; if the Effective
style is Unix, use the UNIX permission troubleshooting above.
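The UNIX-style access decision described above can be modeled as a mode-bit check. The uid/gid/mode values below come from the sample fsecurity output (uid 0, gid 1 daemon, mode 0750); the function itself is an illustrative model of standard UNIX permission evaluation, not Data ONTAP code:

```python
# Sketch of UNIX-style permission evaluation: check the mapped user's
# UID/GIDs against the file's owner, group, and "other" mode bits.

def unix_access(uid, gids, f_uid, f_gid, mode, want="r"):
    """Return True if the user may perform `want` (r/w/x) on the file."""
    bits = {"r": 4, "w": 2, "x": 1}[want]
    if uid == f_uid:
        return bool((mode >> 6) & bits)   # owner bits
    if f_gid in gids:
        return bool((mode >> 3) & bits)   # group bits
    return bool(mode & bits)              # other bits

# DOMAIN\joe mapped to pcuser (uid 65534): not root, not in group daemon,
# so the "other" bits of mode 0750 (---) apply and access is denied.
print(unix_access(65534, [65534], 0, 1, 0o750))   # False
# root owns the file, so the owner bits (rwx) apply.
print(unix_access(0, [0], 0, 1, 0o750))           # True
```

This is why the article directs you to the wcc output: the Windows user's access depends entirely on which UNIX identity the mapping produced.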

3. If permissions have been verified and access to the files still fails, check that
Vscan and FPolicy are not interfering. A simple test is to turn Vscan and
FPolicy off and then test file access.
4. If access to files still fails, contact NetApp Technical Support.

Data ONTAP 7.3.1 and Later
Data ONTAP 7.3.1 and later added the sectrace functionality, which allows the system to trace
user access through the file system. The following is an example of troubleshooting a permission
denied issue for a user attempting to access \\filer\vol1\file1.txt

Filer1*> Sat May 29 18:31:07 GMT [sectrace.filter.denied:info]: [sectrace
index: 1] Access denied because the file could not be scanned and the vscan
option mandatory_scan is set - Status: 1:985177454018560:1056:34 -
10.61.33.202 - NT user name: domain\administrator - UNIX user name: root(0) -
Qtree security style is NTFS and NT ACL is set on file/directory - Path:
/vol/vol1/New Text Document.txt

1. Obtain the IP address of the user that is reporting permission issue.


2. Create a sectrace filter
sectrace add -ip 10.61.81.117

3. Generate the client error message


4. Examine the sectrace output that is logged to the console and also saved in
the system log. The example output will look similar to the following:
5. If you are still unable to determine the cause of the permission failure after
reviewing the output of sectrace, open a case with NetApp Technical
Support. In addition, trigger an AutoSupport and collect the fsecurity show
and wcc -s output as described in the Data ONTAP 7.2 permission
troubleshooting section above.
6. Delete the sectrace filter after troubleshooting in order to avoid log spam.
Filer1*> sectrace delete all

@@@@@@
How to troubleshoot pBlk exhaustion

KB Doc ID 1013397 Version: 10.0 Published date: 04/12/2016 Views: 20470

Description

A large number of CIFS users log in at nearly the same time every day, causing access issues.
Users complain of poor or slow performance and RPC errors when accessing CIFS files on their
Storage Controller.
The Storage Controller stops allowing new CIFS connections.
The Storage Controller stops serving CIFS, and the following might appear in the
/etc/messages log:

[storage1: cifs.trace.connectionEvent:info]: CIFS: Info for client connection
(IP): Too busy to accept new client logons.
[storage1: cifs.login.busy:warning]: (EMS parameters: clientID="192.168.1.100
()")
[cifs.stats.pBlkExhaust:info]: CIFS: All CIFS control blocks for the STANDARD
pool are in use. The request for a new control block can not be granted.

A pBlk is the primary control block that holds the context for each SMB request that passes
through Data ONTAP 7-Mode. In theory, the length of time a pBlk will be in use is
microseconds. There are situations where pBlks might be held up for longer periods of time.
Once a Storage Controller reaches pBlk exhaustion, Data ONTAP will log one or all of the above
warning messages and begin to reduce the TCP window of the currently connected CIFS clients
to reduce the number of requests they can send. If all pBlks are consumed, it will block incoming
CIFS requests from being processed until a pBlk becomes available. Running out of pBlks is the
way that Data ONTAP detects that it needs to exert back-pressure on clients to slow the rate of
incoming requests. This does not indicate that adding more pBlks would allow Data ONTAP to
do more work. The number of pBlks scales with the number of disks so that Data ONTAP does
not have all the pBlks tied up waiting for disk I/O, while in the meantime there are requests
which could be serviced from the cache.
Note: pBlks are a shared resource pool. If you have vFilers configured, the same pool of pBlks
is shared across all the vFilers, including vFiler0 (the physical controller itself). An
additional, separate pool of resources is not created when you create a vFiler.
There are four causes of pBlk exhaustion. The following are the three most common causes:
1. Slow Vscan servers
2. Slow Fpolicy servers
3. Slow authentication

The last scenario occurs when there is an overloaded Storage Controller (that is, an increased
workload), but it is only seen in rare circumstances. This article will go over all four scenarios
and their contribution to pBlk exhaustion.
Before we go into solutions, here is a quick overview of how the number of available pBlks is
computed within Data ONTAP. Data ONTAP assigns a set number of pBlks per storage
controller; the number depends on the release of Data ONTAP. BUG 223732 introduced a
calculation that allows the number of pBlks to scale with larger storage controllers. The
number of pBlks is computed as follows:
If a storage controller's physical memory is 1024 MB or less, Data ONTAP assigns one
pBlk per megabyte (MB) of memory, resulting in a total pBlk count of 1024.
If a storage controller's physical memory is greater than 1024 MB and the release includes
the changes for BUG 223732, Data ONTAP performs a calculation based on the number of
disks in the system (for an HA-Pair, it is based on both heads in the HA-Pair). The premise
is that each disk can handle a number of operations per drive, so multiple outstanding CIFS
operations can be allowed. The calculation is:

Stand-alone system: (num_disks * 8) = total_pBlks
HA-Pair: ((num_disks_StorageController1 + num_disks_StorageController2) * 8) = total_pBlks

total_pBlks <= 1024: Data ONTAP will allocate 1024 pBlks
total_pBlks > 1024:  Data ONTAP will allocate the calculated number of pBlks
This calculation is based on all the physical disks (including LUNs representing physical disks
presented to V-Series Storage Controllers). If the storage controller has multiple paths to disk,
whether MPHA or through multiple HBA's (that is, V-Series), the disk is only counted once.
As part of troubleshooting, storage controller statistics will be gathered around the time of pBlk
exhaustion. The four main stats to be focused on are as follows:

exhaust_mem_ctrl_blk_cnt: This reports the number of times that pBlks have
been exhausted since the last time the stat was cleared. If this number is not
increasing when looking at the stat, then it is just historical and you are not
having an issue at the time of data collection. Typically, this number should
not increase by more than 100 over a 10 second period.

exhaust_mem_ctrl_blk_reserve_cnt: This reports the number of times when all
the pBlks were exhausted from the primary pool and Data ONTAP attempted
to allocate from the reserve pool and was unable to obtain one. The reserve
pool is meant to handle Vscan requests or SMB2 interim responses.

max_auth_qlength: This reports the maximum number of authentication
requests that were in a waiting state to be processed.

max_offload_qlength: This reports the maximum number of requests waiting
in this queue that require off-the-box processing. The only requests
queued here are for Vscan and Fpolicy.
Procedure

Collect the following data:


FILER_CLI> priv set diag; cifs stat -z
FILER_CLI*> stats show -i 1 cifs_stats:vfiler0:exhaust_mem_ctrl_blk_cnt
cifs_stats:vfiler0:exhaust_mem_ctrl_blk_reserve_cnt
cifs_stats:vfiler0:max_auth_qlength cifs_stats:vfiler0:max_offload_qlength

This is done to obtain a sample data over a set period of time to help determine the next steps.
The data collection can be executed manually or the commands can be placed into a simple RSH
script (or script of your choice) to automate the data collection. The output of the commands,
whether run through a script or manually, needs to be captured to an output file. A sample of the
output of the command is here:
FILER_CLI*> stats show -i 1 cifs_stats:vfiler0:exhaust_mem_ctrl_blk_cnt
cifs_stats:vfiler0:exhaust_mem_ctrl_blk_reserve_cnt
cifs_stats:vfiler0:max_auth_qlength cifs_stats:vfiler0:max_offload_qlength
Instance  exhaust_mem_  exhaust_mem_  max_auth_qle  max_offload_
vfiler0   0             0             3             4
vfiler0   0             0             3             4
vfiler0   0             0             3             4
vfiler0   0             0             3             4

Once the data has been collected, compare it to the scenarios outlined below, and then select the
one with the closest match.
Scenario #1: Clients indicate that just connecting (mapping) to shares is slow or failing. The
output of the stats collected displays the following:
Instance  exhaust_mem_  exhaust_mem_  max_auth_qle  max_offload_
vfiler0   153759        0             345           4
vfiler0   153760        0             350           4
vfiler0   153820        0             350           4
vfiler0   153830        0             351           4

The indicator in this scenario is that max_auth_qlength is high and max_offload_qlength is
low. This indicates the storage controller is not exhausting pBlks as a result of Vscan or Fpolicy,
because the offload queue is low. The high max_auth_qlength, along with pBlk exhaustion,
means that slow authentication must be investigated. For more information, see article 1013398:
How to troubleshoot pBlk exhaustion due to Slow Authentication


Scenario #2: Clients indicate that file operations (reads and writes) are slow or failing. The
output of the stats collected displays the following:
Instance  exhaust_mem_  exhaust_mem_  max_auth_qle  max_offload_
vfiler0   153759        0             3             50
vfiler0   153760        0             3             50
vfiler0   153820        0             3             55
vfiler0   153830        0             3             60

The indicator in this scenario is that auth_qlength is low but offload_qlength is high. In this
scenario, the focus of troubleshooting needs to be on Vscan and/or Fpolicy. A low auth_qlength
indicates a low number of authentication requests. The impact on clients will be file
access issues, namely slow read and write file operations. See and work through the following
articles in this order:

1013401: Data ONTAP 7 or Data ONTAP 8-7 Mode: How to troubleshoot pBlk
exhaustion due to Vscan server

1013400: How to troubleshoot pBlk exhaustion due to Fpolicy Server

Scenario #3: Clients indicate that both file operations and mapping of drives are slow. The
output of the stats displays the following:
Instance  exhaust_mem_  exhaust_mem_  max_auth_qle  max_offload_
vfiler0   153759        0             345           50
vfiler0   153760        0             350           50
vfiler0   153820        0             350           55
vfiler0   153830        0             351           60

The indicator in this scenario is that both offload_qlength and max_auth_qlength are high and
incrementing. Initially, focus on slow Vscan and Fpolicy and work through the following
articles in this order:

1013401: Data ONTAP 7 or Data ONTAP 8-7 Mode: How to troubleshoot pBlk
exhaustion due to Vscan server

1013400: How to troubleshoot pBlk exhaustion due to Fpolicy Server

It is likely that the Fpolicy or Vscan server is the primary contributor for exhausting pBlks and
therefore causing the authentication queue to back up. If this turns out not to be the case, then see
the article, 1013398: How to troubleshoot pBlk exhaustion due to Slow Authentication

Scenario #4: Clients indicate that file operations are slow. For example, the storage controller is
acting as a storage back end for a Web farm and Web pages are slow to load. The output of the
stats displays the following:
Instance  exhaust_mem_  exhaust_mem_  max_auth_qle  max_offload_
vfiler0   153759        100           3             4
vfiler0   153760        100           3             4
vfiler0   153820        110           3             4
vfiler0   153830        115           3             4

The indicators in this scenario are that both offload_qlength and max_auth_qlength are low
and not incrementing, but pBlk exhaustion is occurring and pBlk Reserve Exhaust is possibly
incrementing. Because Fpolicy, Vscan, and Authentication are not contributing to pBlk
exhaustion, this may be natural pBlk exhaustion, where the incoming workload consumes
system resources (pBlks) at a rate faster than they can be freed and reused. Should this
condition arise, contact NetApp Technical Support to open a support case. Additional
troubleshooting and data collection may be required with the guidance of NetApp Technical
Support to resolve this issue.
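The triage across the four scenarios above can be sketched programmatically from the collected counter samples. The queue-length thresholds below are illustrative assumptions chosen to match the sample outputs in this article, not documented limits:

```python
# Sketch of the four-scenario triage: given samples of the four counters
# (exhaust_cnt, reserve_cnt, max_auth_qlength, max_offload_qlength),
# decide which troubleshooting path applies. Thresholds are illustrative.

def classify(samples):
    """samples: list of (exhaust_cnt, reserve_cnt, auth_q, offload_q) tuples."""
    exhausting = samples[-1][0] > samples[0][0]          # counter incrementing
    auth_high = max(s[2] for s in samples) > 100         # assumed threshold
    offload_high = max(s[3] for s in samples) > 25       # assumed threshold
    if not exhausting:
        return "no pBlk exhaustion at collection time"
    if auth_high and offload_high:
        return "scenario 3: Vscan/Fpolicy first, then authentication"
    if auth_high:
        return "scenario 1: slow authentication"
    if offload_high:
        return "scenario 2: Vscan/Fpolicy"
    return "scenario 4: natural exhaustion; contact support"

# Samples matching the scenario 1 output above:
print(classify([(153759, 0, 345, 4), (153830, 0, 351, 4)]))
```

In practice the decision is the same visual comparison described in the scenarios: which of the two queues is growing while exhaust_mem_ctrl_blk_cnt increments.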
Note: KB 2011283 is replaced by KB 1013397.
@@@@@@
How to troubleshoot CIFS client access issues caused by oplock delayed breaks

KB Doc ID 1011940 Version: 9.0 Published date: 07/20/2016 Views: 16711

Description

Oplock delayed breaks are generally noticed in the storage system message log after end users
report 'Access Denied' errors or general connectivity issues to the storage system. The error
itself is not indicative of an issue with the storage system. In fact, the storage system is reporting
an issue associated with the client. To understand the oplock delayed break message, it is
important to understand how oplocks work.
The general flow of oplocks is as follows:
1. Client1 opens a \\storage system\share\file1 requesting a batch or
exclusive oplock

2. Storage system responds with a batch or exclusive oplock for file1 to Client1
3. Client2 attempts to open \\storage system\share\file1 requesting a batch
or exclusive oplock
4. Storage system holds off on the open request to Client2 and sends an Oplock
Break Request to Client1 requesting it to flush all its locks
5. Client1 responds to the Oplock Break Request flushing its cache
6. Storage system grants the open to Client2 with the appropriate lock

In the above example, in Step 4 when the storage system sends an Oplock Break Request to
Client1, a 35 second timer is started. If Client1 does not respond to the Oplock Break Request in
35 seconds, the storage system does three things:
1. Log an Oplock Delayed Break message that includes the IP address of the
offending client to the syslog.
Example:
Sun Nov 1 09:51:29 CET [srv123@ntap1:cifs.oplock.break.timeout:warning]:
CIFS: An oplock break request to station <IP>()
2. Forcefully clean up any locks associated with the file for Client1.
3. Grant the open response to Client2.
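The 35-second timeout behavior described above can be modeled as follows. This is an illustrative sketch with no real SMB traffic; only the 35-second constant and the three resulting actions come from the article:

```python
# Model of the storage system's oplock break timeout handling.

OPLOCK_BREAK_TIMEOUT_S = 35

def handle_oplock_break(response_after_s):
    """Return the actions the storage system takes, given how long the
    client took to answer the Oplock Break Request (None = never)."""
    if response_after_s is not None and response_after_s <= OPLOCK_BREAK_TIMEOUT_S:
        # Client1 flushed its cache in time; Client2 gets its open.
        return ["client flushed cache", "grant open to Client2"]
    # Timer expired: log, clean up, and let Client2 proceed anyway.
    return [
        "log cifs.oplock.break.timeout warning with client IP",
        "forcefully clean up Client1's locks on the file",
        "grant open to Client2",
    ]

print(handle_oplock_break(2))     # well-behaved client
print(handle_oplock_break(None))  # client never responds
```

The key point the model captures is that Client2 is eventually granted the open either way; the warning message only identifies which client failed to respond.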

Since Oplock Delayed Breaks are indicative of issues with the client, troubleshooting efforts
should be focused on the client. There are three common reasons why a client will not respond to
an oplock break request:
1. The client rebooted abnormally (such as blue screened) and therefore no
longer believes it holds a lock on the file.
2. The client has too many open connections to the storage system and
therefore cannot respond to the Oplock Break Request.
3. There are network connectivity issues between the client and the storage
system inhibiting the client from receiving the Oplock Break Request.
Procedure

Scenario 1
The client rebooted abnormally (such as blue screened) and therefore no longer believes it holds
a lock on the file.

If the client rebooted abnormally, the fastest way to find out is to examine the System
Log on the client for an Event associated with the reboot. In general, it will be an Event ID
6008, although Event ID 6008 is not always logged. You might also see an Event ID 6009 or
6005 reporting that the system rebooted and the event log service started. If the times in the
event log line up with the delayed break messages on the storage system, then a reboot was
the cause.
Scenario 2
The client has too many open connections to the storage system and therefore cannot respond to
the Oplock Break Request.
To operate more efficiently, Windows Clients combine many CIFS sessions into the same TCP
connection; these connections are known as multiplexed connections. By default, the Windows
Client will limit the number of multiplexed connections to a server to 50. If a client has 50
multiplexed connections currently open and the storage system sends an Oplock Break Request to
the client, the client will not be able to send a response back to the storage system until one of
the multiplexed connections is closed. An easy way to see if this is happening is to
examine the section of "cifs stat" where Max Multiplex is listed. If the Max Multiplex number is
equal to the options cifs.max_mpx number, then it is likely the culprit. An example of
troubleshooting this scenario would be:
1. Run cifs stat -z (to clear the cifs state counters)
2. Run cifs stat repeatedly leading up to and after the Oplock Delayed Break
message is reported
3. Examine if the Max Multiplex number equals the options cifs.max_mpx
number
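The comparison in step 3 can be automated with a short script over the captured counter output (a sketch; the `Max Multiplex = N` line format is an assumption and may differ between Data ONTAP releases):

```python
import re

def max_multiplex_exhausted(cifs_stat_output: str, max_mpx: int) -> bool:
    """Return True if the Max Multiplex counter has reached cifs.max_mpx.

    The regular expression assumes a line such as 'Max Multiplex = 50'
    appears in the `cifs stat` output; adjust it to match the format
    your Data ONTAP release actually prints.
    """
    match = re.search(r"Max Multiplex\s*=\s*(\d+)", cifs_stat_output)
    if match is None:
        raise ValueError("Max Multiplex counter not found in output")
    return int(match.group(1)) >= max_mpx

# Example: the counter equals the limit, so multiplex exhaustion is likely.
print(max_multiplex_exhausted("...\nMax Multiplex = 50\n...", 50))  # True
```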

The steps to troubleshoot this scenario are usually taken when Citrix Servers, Terminal Service
Servers, IIS Web Servers, or Windows Applications report failures around the time the Oplock
Delayed Break messages occur. If this scenario matches the cause of the issue, or to adjust the
Multiplex settings on both the storage system and the Windows Client, see 3011249: What is CIFS
Max Multiplex and how to increase the maximum number of simultaneous outstanding Windows
client requests that a controller allows?
Scenario 3
There are network connectivity issues between the client and the storage system inhibiting the
client from receiving the Oplock Break Request.
Packet traces will be required to troubleshoot this issue. The goal of the packet traces is twofold:

1. Determine if the storage system is sending the Oplock Break Request.


2. Determine if the client received the Oplock Break Request.

In order to troubleshoot the issue with packet traces, you must obtain matching packet
traces from the storage system and the client that the storage system is reporting in the Oplock
Delayed Break message. It is also important to note that it takes 35 seconds from the time that
the second client opens the file for the storage system to time out the Oplock Break Request for
the original client. The packet trace flow will look like this:
1. Start the packet trace on Client1
2. Start the packet trace on Storage system
3. From Client1 open \\storage system\share\file1
4. From Client2 open \\storage system\share\file1
5. Wait 35 seconds for the Storage system to log the Oplock Delayed Break
6. Stop the packet trace on Client1
7. Stop the packet trace on Storage system

The Wireshark filter you will use to find the Open of File1 by Client1 and Client2, along with the
Locking Request sent to Client1, will be in the following format (note: replace file1 with the
name of the file you are working with):
smb.cmd == 0x24 or smb.file contains "file1"
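If you adapt the filter to other file names often, building the string programmatically avoids quoting mistakes (a sketch; 0x24 is the SMB1 Locking AndX command code used in the filter above):

```python
def smb_oplock_filter(filename: str) -> str:
    """Build the Wireshark display filter shown above: match SMB
    Locking AndX packets (command 0x24, which carries the oplock break)
    or any SMB packet referencing the file of interest."""
    # An embedded double quote would break the display filter syntax.
    if '"' in filename:
        raise ValueError("filename must not contain double quotes")
    return f'smb.cmd == 0x24 or smb.file contains "{filename}"'

print(smb_oplock_filter("file1"))
# smb.cmd == 0x24 or smb.file contains "file1"
```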

Related Links:

BUG 273616

BUG 69976

BUG 12483

3013892: FAQ: Top 10 CIFS Issues and Solutions

@@@@@@@@@
How to troubleshoot CIFS file access issues when Vscan is involved on Data ONTAP
7G

KB Doc ID 1013399 Version: 6.0 Published date: 06/20/2014 Views: 14683

Description

The storage controller can interact with an anti-virus (AV) server to help customers
prevent a virus from infecting the data on a NetApp storage controller. This interaction with anti-virus
servers presents potential challenges when responding to client requests for data access. This
article will provide solutions or point to existing KB articles that contain further details. Before
going into the various scenarios, this article will cover, at a very high level, how antivirus
interacts with the storage controller during read and write operations initiated by clients. The
first overview is how a read operation flows, when antivirus is configured and active on the
storage controller.
The general flow of a CIFS read operation when Vscan is involved is as follows:
1. Client1 has a drive mapped to the storage controller and opens up fileA.rtf.
2. Storage controller checks the inode to determine if fileA.rtf needs to be
scanned. There is a flag in the inode that indicates if the file needs to be
scanned. For this example, assume the file needs to be scanned.
3. Storage controller issues RPC request to the AV server requesting a file be
scanned.
4. AV server then connects to the storage controller over a special hidden share,
ONTAP_ADMIN$, to retrieve some or all of the file to check for a virus.
5. AV server sends RPC with response: Ok, not Ok (clean or not clean).
6. Storage controller marks the flag in the inode for the file that says it has been
scanned.
7. Storage controller responds to the client's initial read request appropriately
given the response in step 5.

In this scenario, as you can see, the client's request is not answered until the file is scanned by the
AV server. Depending on the speed of the AV server to accept, retrieve, and scan the files, this could
have an impact on the response to the client's request to read a file. The client's request is not
satisfied until the scan is completed.
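The read path above can be condensed into a small decision model (illustrative only; it simplifies the inode scan flag and assumes mandatory_scan is on, so anything other than a clean verdict denies the read):

```python
def read_request(needs_scan, av_verdict=None):
    """Model of the Vscan read flow described above: the client's read
    is not answered until any required scan completes.

    needs_scan mirrors the inode flag checked in step 2; av_verdict is
    the AV server's RPC response from step 5 ('clean' or anything else).
    """
    if not needs_scan:
        return "serve_read"               # flag says file was already scanned
    if av_verdict == "clean":
        return "mark_scanned_then_serve"  # steps 6-7: set flag, answer read
    return "deny_access"                  # simplification: mandatory_scan on

# A file already marked scanned is served without contacting the AV server.
print(read_request(needs_scan=False))  # serve_read
```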
The general flow of a CIFS write operation when Vscan is involved is as follows:

1. Client1 has fileA.rtf opened and issues a write to the file, then closes the
handle.
2. Storage controller acknowledges and responds to the client for the write
operation.
3. Storage controller sends RPC call to the AV server indicating a need for a file
to be scanned.
4. AV server then connects to the storage controller over a special hidden share,
ONTAP_ADMIN$, to retrieve enough of the file to check for a virus.
5. AV server sends an RPC response to the virus scan operation.
6. Storage controller sets a flag in the inode for the file indicating it has been
scanned.

The difference here is that the client operation is acknowledged prior to the request being sent to the
antivirus server. This is in contrast to how a read operation is handled. As you can see in
both of the above scenarios, there are several things going on. We have RPC connections on both
the AV server and the storage controller and the AV server connects to a special hidden CIFS
share to retrieve all or part of the file to scan. Listed below you will find some of the most
common issues associated with CIFS and AV scanning.
Procedure

Scenario 1: Client is complaining that files are taking a long time or fail to open
This could be the result of virus scans taking a long time to complete. A client file operation that
needs to be scanned might not be satisfied by the storage controller until either of the following:
1. The AV server replies with Ok or Not Ok, otherwise known as clean or
something other than clean.

2. The timeouts configured for Vscan options in Data ONTAP are exceeded.

The following can help to resolve situations where files fail or are slow to open when virus
scanning is involved:

Ensure the recommended Vscan timeouts are set as outlined in the article
3011812: What is the recommended value for the filer setting "vscan options
timeout"? This article covers the storage controller and supported AV vendor
applications and how the respective timeouts should be set.

Review the Data ONTAP Vscan option mandatory_scan using the storage
controller CLI: vscan options <enter>.

If the mandatory_scan option is set to on, then file operations where a scan is initiated will
require a response from the AV server to the scan request. If the AV server(s) is/are slow and the
Vscan option abort_timeout is exceeded, or all the AV servers become disconnected, then the
file operations that need virus scanning will be responded to with access denied.
If you are having issues with your AV server staying connected, a quick workaround until you
can resolve the issue is to turn mandatory_scan off by running vscan options
mandatory_scan off.
Caution: Ensure you have sufficient AV protection outside the storage controller to provide
data protection. See Scenario 2 if you are having difficulty with the AV servers staying
connected.
Scenario 2: The storage controller is disconnecting from the AV server and reports any of the
below messages (Data ONTAP reports message involving the AV environment to
/etc/messages):
1. Error message: [CIFSVScan:warning]: Virus Scanner: connectToServer
\\servername failed [0xc000005e]

Solution #1: This could be the result of the RPC pipe on the AV server not
allowing the storage controller to establish a connection.
See the following KB articles to work through ensuring access to the RPC pipe
is properly configured:
2011317: Error message: [CIFSVScan:warning]: Virus Scanner:
connectToServer \\servername failed [0xc000005e]

Solution #2: Ensure the storage controller can fully resolve the AV server
name. If the storage controller cannot resolve the FQDN of the AV server, it
can report the 0xc000005e message.

Solution #3: If the storage controller has multiple interfaces configured with
IPs in the same subnet, it can cause an issue when the storage controller
initiates a conversation with the Vscan server. The AV server may initiate a
conversation to the storage controller on filer_interfaceA but then the
storage controller might initiate a conversation back to the AV server over
filer_interfaceB. This confuses the AV server and it will likely not answer.
This then causes the storage controller to report the 0xc000005e error. Data
ONTAP will then teardown the conversation established by the AV server on
filer_interfaceA

2. Error message: [storage controller: vscan.dropped.connection:warning]:
CIFS: Virus scan server \\AVServerA (x.x.x.x) has disconnected from the
storage controller. You will need to investigate this further to know the
reason for the disconnect:

Solution #1: Do you see a message at or around the same time indicating
that the AV server version changed? If so, the disconnect is expected as the
services are restarted on the AV server after the virus definitions are updated.
This causes the AV server to disconnect from and reconnect to the storage
controller, in order for the new definition files to be read. This also causes all
the inode flags that previously indicated a file had been scanned to be reset.
This is because the AV server has new virus definitions, and files need to be
scanned against the new definitions. Once scanned against the new definitions,
the flag will be set again.

Solution #2: Is the message preceded by a large number of Too many
consecutive virus scan requests timed out messages? If so, see the next
section. When the storage controller disconnects due to too many timed out
requests, the only connection torn down is the RPC connection from the
storage controller to the AV server. The AV server will not know until it makes
its next request over the RPC connection it maintains with the storage
controller. At this time, the storage controller will reply to the AV server with
an RPC code that says "I am not listening anymore". The AV server will then
disconnect and reconnect.

3. Error message: [storage controller: vscan.server.requestTimeout:error]:
CIFS: Virus scan request to 10.10.10.10 for file
ONTAP_ADMIN$\vol\vol0\file.msg timed out. Too many consecutive virus scan
requests timed out, breaking connection.

Solution: If Data ONTAP receives too many consecutive timeouts for requests sent
to a particular AV server, it will assume that the AV server is having
an issue. Data ONTAP will disconnect the AV server and report the above
message. Any queued scan requests for that AV server will either be re-queued
to other configured AV servers, or the responses to the file operations will be
dispatched based on the mandatory_scan setting.
For more information, see article 2012373: Virus scan times out with an error
message: CIFS: Too many consecutive virus scan requests to server timed
out, breaking connection

Scenario 3: Client is complaining about file access and the storage controller is reporting that
scans are timing out: [storage controller: vscan.server.requestTimeout:error]:
CIFS: Virus scan request to 10.10.10.10 for file
ONTAP_ADMIN$\vol\vol0\mail.msg timed out. Storage controller will retry
this request on other vscan servers, if possible.


Solution: This message indicates that a file being scanned has exceeded the Vscan option for
abort_timeout configured in Data ONTAP. If that timer has been reached and there are no other
AV servers to send the request to, the storage controller will look at the mandatory_scan option
to determine how the file operation should be responded to, see 'Scenario #1'. If there is more
than one AV server connected to the storage controller, then the timed out request will be tried on

the next available AV server. If too many requests time out, an AV server will be queued for
disconnect, see 'Scenario 2'.
Scenario 4: Storage controller reports that a virus has been found; does this mean that it has been
the victim of a virus infection? You may see messages similar to the following:
[storage controller1: vscan.virus.created:ALERT]: CIFS: Possible Virus
Detected - File ONTAP_ADMIN$\vol\vol1\somespreadsheet.xls in share
some_share$ modified by client 192.168.1.8 (**unknown**) running as user
S-1-5-21-1234567890123456789-2222222222-111222 may be infected. The storage
controller received status message Error 5 ...Continuing and error code
[0x5] from vscan (anti-virus) server 192.168.100.100.

Solution: You need to look at the message closely. The occurrence of a vscan.virus.created
message does not mean that a virus was found. As the above example shows, a virus was not
found; rather, an error code of 0x5 was returned. Data ONTAP only knows ok or not ok: file
clean or something other than clean. If the response from the AV server is anything but OK/file
clean, then it is assumed to be a virus EVEN if it is not a virus. If a virus had been found during
the scan, the error message displayed by the storage controller would include the name of the
virus. As the above shows, it is actually an error from the AV server, and the error message
written states "may be infected". Work with the AV software vendor when the message states
anything other than "virus found" and the virus name. The message above is created and
populated with information sent back from the AV server on completion of the scan.
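A quick triage of such ALERT messages can be scripted (a sketch; the substrings matched are assumptions drawn from the example message above, not a documented format):

```python
def classify_vscan_alert(message: str) -> str:
    """Rough triage of a vscan.virus.created ALERT, per the guidance
    above: an 'error code [0x..]' in the message points at an AV-server
    error rather than a confirmed infection."""
    if "error code [0x" in message:
        return "av_server_error"  # engage the AV software vendor
    return "possible_virus"       # message should name the virus found

msg = ("CIFS: Possible Virus Detected - ... received status message "
       "Error 5 ...Continuing and error code [0x5] from vscan server")
print(classify_vscan_alert(msg))  # av_server_error
```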
Scenario 5: Storage controller is reporting messages indicating a completion request was lost:
Solution: This occurs when a request sent to the AV server timed out (abort_timeout
reached) and the storage controller cancelled the request. Then, sometime after the request was
cancelled on the storage controller, the AV server replied that it finished the scan. The request is
lost because Data ONTAP moved on and no longer has a record that a request was sent to the
responding server. Once a scan in progress has exceeded the abort_timeout, the storage
controller will do one of the following:

Re-submit the request to any other configured AV server

-OR-

If there are no other AV servers configured for the storage controller, the
option for mandatory_scan will be consulted and the file operation will be
responded to accordingly. If the AV server replies in the interim, the storage
controller will report a completion request lost message. When the storage
controller moves on after a scan request reaches the abort_timeout value, it
does not inform the AV server to stop scanning.

If you still need assistance after reviewing the above scenarios, proceed with the following:
1. Open a case with the AV vendor. Many times, a case involving external AV
servers will need to be investigated as a joint effort between NetApp Support
and the AV software vendor.
2. Open a case with NetApp Technical Support:

Generate an AutoSupport from the affected storage controller.

Have the AV vendor name and version available.

@@@@@@@@@@
How to terminate CIFS in Data ONTAP 7-Mode for a particular volume or close a
specified CIFS session in clustered Data ONTAP 8.3 without affecting other CIFS
accessibility

KB Doc ID 1011064 Version: 14.0 Published date: 02/26/2016 Views: 19909

Description

In Data ONTAP 7-Mode:


How to terminate CIFS for a particular volume without affecting other volumes' CIFS
accessibility.
Unable to take a volume offline if CIFS sessions are active on the CIFS share.
Run the 'vol offline' command, which reports the following:
vol offline: cifs open files prevent operation

In Clustered Data ONTAP 8.3


How to close an open CIFS session using different parameters.
Procedure

Data ONTAP 7-Mode:


To terminate CIFS only on a particular volume (vol1 is used as an example):
filer> cifs terminate -t 5 -v vol1
Total number of open CIFS files by sessions using volume vol1: 1
Warning: Terminating CIFS service while files are open may cause data

loss!!
5 minute left until termination (^C to abort)...
Sat Oct 14 14:19:50 PDT [rc:error]: CIFS terminated for volume vol1 with 1
open file
Share vol1 disabled until cifs reactivated for volume vol1
filer> vol offline vol1
Sat Oct 14 14:20:53 PDT [config_async_0:notice]: CIFS - disabled for
volume vol1
Volume 'vol1' is now offline.

To restart CIFS on a specific volume (for Data ONTAP 6.5 and later):
filer> cifs restart -v vol1

Note: Data ONTAP 7.0.X and later will ask the user to terminate CIFS when the vol offline
command is run. For example:
filer7g> vol offline source
Total number of open CIFS files by sessions using volume source: 1
Warning: Terminating CIFS service while files are open may cause data
loss!!
Enter the number of minutes to wait before disconnecting [5]: 0
Sat Oct 14 17:09:35 EDT [cifs.terminationNotice:warning]: CIFS: shut down
completed: CIFS terminated for volume source with 1 open file.
Share source disabled while volume source is offline.
Sat Oct 14 17:09:36 EDT [wafl.vvol.offline:info]: Volume 'source' has been
set temporarily offline
Volume 'source' is now offline.

Additional information:
Note: If CIFS is terminated for a volume instead of the system, any volume renames will be
reflected in CIFS shares. For example,
Beginning state: The share 'testCIFS' originally points to the path c:\vol\testCIFS
Scenario 1 (terminate CIFS on a volume):
Filer01> cifs terminate -t 0 -v testCIFS
Filer01> vol rename testCIFS new_testCIFS
Filer01> cifs restart -v new_testCIFS

Result: the share 'testCIFS' is changed to point to the path c:\vol\new_testCIFS


Scenario 2 (terminate CIFS on the system):
Filer01> cifs terminate -t 0
Filer01> vol rename testCIFS old_testCIFS

Filer01> cifs restart

Result: the share 'testCIFS' is not changed. It will attempt to point to the path
c:\vol\testCIFS. If a path by that name does not exist, the share will not be restored and an
error will be reported to the console.
Clustered Data ONTAP 8.3: (note this is a new 8.3 feature not available in 8.2)
The vserver cifs session show command displays information about established
CIFS sessions
You can use different parameters to close CIFS sessions shown as below:

To close all the CIFS sessions on a specified node:


Cluster::> vserver cifs session close -node <nodename>

To close all the open CIFS sessions on the specified CIFS-enabled Vserver:
Cluster::> vserver cifs session close -vserver <vserver name>

To close the open CIFS session that matches the specified session ID:
Cluster::> vserver cifs session close -session-id <Session ID>

To close all the open CIFS sessions that match the specified connection ID:
Cluster::> vserver cifs session close -connection-id <Connection ID>

To close all the open CIFS sessions that are established through the specified data
LIF IP address:
Cluster::> vserver cifs session close -lif-address <Incoming Data LIF IP
Address>

To close all the open CIFS sessions that are established with the specified
authentication mechanism:
Cluster::> vserver cifs session close -auth-mechanism <Authentication
Mechanism>

Example:

NTLMv1 - NTLMv1 authentication mechanism

NTLMv2 - NTLMv2 authentication mechanism

Kerberos - Kerberos authentication mechanism

Anonymous - Anonymous authentication mechanism

To close all the open CIFS sessions that are established for the specified CIFS
user:
Cluster::> vserver cifs session close -windows-user <Windows User>

The acceptable format for CIFS user is [domain]\user.

To close all the open CIFS sessions that are established for the specified UNIX
user:
Cluster::> vserver cifs session close -unix-user <Unix User>

To close all the open CIFS sessions that are established over the specified version
of the CIFS protocol:
Cluster::> vserver cifs session close -protocol-version <Protocol Version>

Protocol versions include:

SMB1 - SMB 1.0

SMB2 - SMB 2.0

SMB2_1 - SMB 2.1

SMB3 - SMB 3.0

To close all the open CIFS sessions with open files that have the specified level of
continuously available protection:
Cluster::> vserver cifs session close -continuously-available <CIFS Open File
Protection>

To close all the open CIFS sessions that are established with the specified SMB
signing option:
Cluster::> vserver cifs session close -is-session-signed {true|false}

@@@@@@@@@@
How to add a user to Common Internet File System (CIFS) protocol shares within a
workgroup

KB Doc ID 1011622 Version: 7.0 Published date: 05/20/2015 Views: 3866

Description

How to grant access to a share on the controller to users within a workgroup without giving them
root access to the controller.
How do you grant restrictive permissions without a domain?
Pass through authentication.
Procedure
1. Start by creating a user on the controller:
controller> useradmin user add [newuser] -g [group]
Where [newuser] is the account that will be used on the Windows system
and [group] is the required access group on the controller.

Input the desired password and retype to confirm. The password on the controller must
match the one on the Workgroup Windows system.
A confirmation of User [newuser] added should appear once this is complete.
2. Then, on the Windows server, open MMC and connect to the controller as its
local admin.
3. Go to System Tools > Local Users & Groups > Groups.
4. Create a new group and add the user to this group.
5. Verify the changes by reading the /etc/lclgroups.cfg file, looking for entries
consistent with the user and group created above.

Note: Any local user created on the controller prior to Data ONTAP 7.x has root access.
@@@@@@@
How to set up CIFS auditing on the controller

KB Doc ID 1011243 Version: 15.0 Published date: 03/16/2016 Views: 59413

Description

This article describes the procedure to set up Common Internet File System protocol (CIFS)
auditing on the controller. It also describes why there are lots of small audit files
when the log size is set to a larger number.

Procedure

CIFS auditing does not function like traditional event auditing on a Windows client. The
controller stores audit events in temporary memory until the log reaches a preset or user defined
threshold. Once this threshold has been exceeded, the temporary file is written to a standard EVT
format file that can be viewed using an auditing tool. Due to the nature of this format,
events cannot be seen in real time until the EVT file is generated.
For a closer view of the real time event, LiveView must be used. Before Data ONTAP 7.2.2 /
Data ONTAP 7.3RC1, LiveView will only be able to display 1000 events per EVT log and each

EVT log is written once a minute. This setting is hard-coded into LiveView and cannot be
changed. For more information, see BUG 217215
Note: Before Data ONTAP 7.2.2 / Data ONTAP 7.3RC1, LiveView supersedes custom save
triggers. You can either have a custom save log, or you can use LiveView. You cannot use a
custom save log and LiveView at the same time. If LiveView is used, a log file will be written
every minute. LiveView will only retain 1000 audit entries per one minute file.
Starting with Data ONTAP 7.2.2, the number of entries was raised to 5000, and LiveView can be
enabled together with cifs.audit.autosave options, which control the size of the internal
audit file and how it is saved. See File Access and Protocols Management Guide for more
information.
For more information to run CIFS auditing on clustered Data ONTAP, see article 1015035: How
to set up CIFS auditing with clustered Data ONTAP
To enable CIFS auditing on the controller, run one of the following commands:
Filer> cifs audit start

-OR-

Filer> options cifs.audit.enable on

When auditing is enabled for the first time, the audit log is written once a day, or when it
becomes 75% full (384 MB); this is the default setting. Several options can be used to control
how often the log files are written. To set an option, use the following syntax:
options cifs.audit.[option] [variable]

To set up additional items that will be audited, you need to configure specific audit rules for each
share or qtree:
1. In Computer Manager, go to qtree or the folder that you need to audit.
2. Select the Security tab , then the Advanced tab, and select Auditing.
3. Specify the groups and events to be audited.

The options are outlined as follows:

cifs.audit.autosave.onsize.enable [on/off]

Determines if the log will be saved when a certain size threshold is reached.

cifs.audit.autosave.onsize.threshold [%,k,m]

Sets the size threshold for the save on size. This can be a percentage of the
entire log size, or a fixed size that is smaller than the total log size. It is
recommended to give the log at least 20% overhead to handle any events
that occur during the write.

cifs.audit.autosave.ontime.enable [on/off]

Determines if the log will be saved on a time interval. Be aware that if this option is used
without the onsize setting, then log wrapping can occur if the log file is set to
be too small.

cifs.audit.autosave.ontime.interval [m,h,d]

Sets how often the log is saved with on time. The maximum time that can be
set is 7 days.
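The "at least 20% overhead" recommendation for the onsize threshold can be computed directly; as a sanity check, the default save point mentioned earlier (75% full, 384 MB) corresponds to a 512 MB log with 25% overhead. A sketch:

```python
def onsize_threshold_bytes(logsize_bytes: int, overhead_pct: int = 20) -> int:
    """Threshold (in bytes) that leaves the recommended overhead so
    events arriving during the save do not overflow the log."""
    if not 0 < overhead_pct < 100:
        raise ValueError("overhead percentage must be between 1 and 99")
    return logsize_bytes * (100 - overhead_pct) // 100

# The default behavior saves at 75% full: 384 MB for a 512 MB log.
print(onsize_threshold_bytes(512 * 1024 * 1024, overhead_pct=25))  # 402653184
```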

To configure the attributes of the log file, use the following options:

cifs.audit.logsize 524288 - 68719476736

Sets the total size of the log file in bytes.

Warning: be cautious about setting the log file size high and then
setting cifs.audit.liveview.enable to on. The conversion
process of the log file to .evt format can cause a CIFS
performance penalty.

cifs.audit.autosave.file.limit 0-255

Sets how many EVT files will be kept on the controller. Changing this option to
0 disables the limit, allowing as many EVT files to be kept as storage
allows.

cifs.audit.autosave.file.extension [counter/timestamp]

Setting this option will save each log file with either a time stamp of the time
the log was written, or with a series number.

cifs.audit.saveas [path]

Sets a default location for a manually triggered log save, along with the file
name.

To configure the log to be viewed in real time, use the following options:

options cifs.audit.liveview.enable [on/off]

Note: LiveView supersedes custom save triggers. Either have a custom save log, or use
LiveView. You cannot use a custom save log and LiveView at the same time. If LiveView
is used, a log file will be written every minute. LiveView will only retain 1000 audit
entries per 1 minute file.

The following options customize what is recorded in the audit log:

cifs.audit.account_mgmt_events.enable [on/off]

Sets whether the audit log will record changes to user or share account
management on the controller.

cifs.audit.file_access_events.enable [on/off]

Sets whether the audit log will record file access events on the controller
through CIFS.

cifs.audit.logon_events.enable [on/off]

Sets whether the audit log will record user, machine or domain login events
through CIFS.

cifs.audit.nfs.enable [on/off]

Sets whether the audit log will record NFS events. Only those events that are
specified by the Windows SACLs will be recorded.

cifs.audit.nfs.filter.filename [name]

Location on the controller that contains the list of NFS filters. The NFS filters
determine which files will be written to the log.

Why do I have lots of small audit files when I set the log size to a larger number?
LiveView is enabled for auditing and it is hard coded to save every minute, up to the maximum
number of allowed files as set by cifs.audit.autosave.file.limit. In order to view events
over the course of several minutes, you need to use a third-party auditing viewer. By default, the
Windows event viewer is only able to view single log entry files. If you need only one
log file of a certain size, or only one log file over a span of time, disable LiveView:
Filer> options cifs.audit.liveview.enable off

Why is the CIFS Audit Log Empty?


Follow these steps to troubleshoot:
1. Confirm that CIFS auditing is enabled:
Filer> options cifs.audit.enable on

2. Confirm that the audit log is being saved to a volume or qtree that is
set up with the NTFS security style. Also, confirm that the proper ACLs have
been applied to the directory where the audit log is being saved
(BUILTIN\Administrators: Full Control).
@@@@@@@@@@@@
What is the filer's default CIFS TCP window size?

KB Doc ID 3010838 Version: 11.0 Published date: 05/04/2015 Views: 16469

Answer

The default CIFS TCP window size on the controller is 17520 bytes, but can be increased to
8387700 bytes. To view the current setting, run the following command:
Filer> options cifs.tcp_window_size

For more information on how to force the Windows 2000/XP client to use larger window sizes,
refer to CIFS TCP Window size is limited to 64240 bytes.
Note: CIFS should be terminated and restarted for the option cifs.tcp_window_size to take
effect. The TCP window size is only set when CIFS registers with the network code. That
happens when CIFS starts or when a new name is registered as a NetBIOS alias. If you are not
restarting CIFS, only the newly registered names will have the changed window size. Note that
when registering a NetBIOS alias, CIFS looks at the controller's registry key
options.cifsinternal.aliaswindowsize to get the value to use. This value is updated
whenever a NetBIOS alias is created and it uses the then-current value of
cifs.tcp_window_size. This registry value is maintained separately from the current value of
cifs.tcp_window_size.
Also see the performance section of your respective Data ONTAP System Administrator's Guide
for additional information. See SCON/SN.0/TCPAutoTune for more information.
Related Link:

3013918 What is the default, minimum, maximum, and recommended value


for cifs.tcp_window_size in Data ONTAP 7-Mode?

Note: In clustered Data ONTAP, the maximum TCP window receive size has been pre-set to
7631441 bytes.
@@@@@@@@@@
How to restrict Common Internet File System Protocol (CIFS) host access with the
/etc/usermap.cfg file

KB Doc ID 1010514 Version: 5.0 Published date: 06/20/2014 Views: 6780

Description
Procedure

The IP qualifier can be used to restrict host access, although it is intended to address security
concerns in multi-protocol environments because of possible automatic conversion of Network
File System (NFS) UIDs to NT SIDs, without being authenticated by an NT Domain Controller.
Use an IP qualifier in the /etc/usermap.cfg file to restrict CIFS access by host and/or
network. The syntax is as follows (See the usermap.cfg manual page for additional options):
IP_or_hostname_or_network:domain>\* => ""

Sample /etc/usermap.cfg entries:

To restrict all hosts in network 10.10.10.0, netmask 255.255.255.0, in the


netapp domain from accessing the filer's CIFS shares:
10.10.10.0/24:netapp\* => ""

To restrict host client in the netapp domain from accessing the filer's CIFS
shares:
client:netapp\* => ""

The host and subnet map restrictions may be confusing to Windows NT users in that the filer's
shares can still be seen (though not accessed) by restricted hosts and networks.
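The subnet qualifier in the first sample entry matches clients by ordinary CIDR containment. A small Python sketch of that matching logic, using the sample network from above (this illustrates the matching rule only; it is not a NetApp tool):

```python
import ipaddress

def matches_qualifier(client_ip: str, qualifier: str) -> bool:
    """True if client_ip falls inside the subnet qualifier,
    e.g. the 10.10.10.0/24 prefix from the sample usermap.cfg entry."""
    return ipaddress.ip_address(client_ip) in ipaddress.ip_network(qualifier)

# Hosts inside 10.10.10.0/24 would hit the mapping to "" and be denied.
print(matches_qualifier("10.10.10.42", "10.10.10.0/24"))  # True
print(matches_qualifier("10.10.11.42", "10.10.10.0/24"))  # False
```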
@@@@@@@
What is the recommended tuning for Windows Terminal Server (WTS) CIFS clients?

KB Doc ID 3010046 Version: 5.0 Published date: 06/20/2014 Views: 7061

Answer

Slow network performance with Windows Terminal Server.


Error message: STATUS_RANGE_NOT_LOCKED

Windows 2000-based clients that attempt multiple simultaneous long-term requests against a file
server might receive error code 56, "The network BIOS command limit has been reached",
even if larger MaxCmds or MaxMpxCt values have been specified in the registry.

For performance reasons, you may need to increase cifs.max_mpx to 1124 while using
the storage system with a Windows Terminal Server (WTS).
Note: Before taking max_mpx up to 1124, it is recommended to check the current
max_mpx value.
Steps to modify the cifs.max_mpx value from the storage system:
1. Terminate CIFS: cifs terminate.
2. Enter options cifs.max_mpx 1124.
3. Restart CIFS: cifs restart.
@@@@@@@@@@
What are the tools available for migrating data from Windows server or Common
Internet File System protocol (CIFS)-only filer to another filer?

KB Doc ID 3010045 Version: 6.0 Published date: 02/01/2016 Views: 5234

Answer

Some tools used to migrate Windows data to the filer are:


Windows Resource Kit utilities: scopy (Windows NT 4.0), xcopy, permcopy (Windows 2000)
and ROBOCOPY (see 1011065).
Note:
Permcopy preserves share-level ACLs but does not copy NTFS file permissions and is best used
to duplicate shares on multiple Windows 2000 hosts where you have elaborate share permissions
that are difficult to duplicate manually. See your Windows Resource Kit documentation for
details.
The latest versions of BindView and NetIQ are known to work best with the filer. They can be
used to migrate users and shared resources.
Example of SCOPY:
Use SCOPY to migrate data from Windows to a filer, or from a CIFS-only filer to another filer.
Note:

SCOPY may skip files that are open. Be sure all files are closed during the data migration.
To use SCOPY for migrating between shares and preserve the original file permissions:
1. Obtain SCOPY from the Windows NT Server 4.0 Resource Kit
2. Use SCOPY to copy shares between shares while preserving the NTFS style
permissions

Usage:
scopy \\server\share\path \\filer\path /s /o /a

Explanation of options:
/s copies all files in subdirectories.
/o copies owner security information.
/a copies auditing information.
Note:
While file security will be preserved, share level security will not be preserved when copying a
share.
SCOPY can only be used to copy files with NTFS security. UNIX style permissions will not be
preserved.
@@@@@@@
How to move files from file server to filer using Robocopy to retain Windows ACLs

KB Doc ID 1011065 Version: 5.0 Published date: 06/20/2014 Views: 5816

Description

Copy files from a file server to a NetApp filer using Robocopy 10 to retain Windows
permissions.
Caution
Robocopy is not a NetApp tool; please engage Microsoft for assistance with it. The
content of this article is provided as a courtesy to NetApp customers and is not a
definitive guide to data migration.

Procedure

Robocopy allows users to retain Windows file permissions when copying to a filer. To copy files
from a Windows server to a NetApp filer, use the following command syntax:
robocopy source destination /S /SEC
To retain the current ownership setting of the files being copied or moved, use one of the
following switches with the ROBOCOPY command:
ROBOCOPY source destination /S /COPYALL or ROBOCOPY source destination /S /COPY:DATSO
In order to keep the filer in sync during a pilot phase of a migration, you can also put the
following into a script and schedule it to run each hour on the Windows source server:
robocopy <sourcedir> \\filer\share /copyall /mir /zb /r:1 /w:3
@@@@@@@
How to configure Common Internet File System protocol (CIFS) client-side caching

KB Doc ID 1010180 Version: 6.0 Published date: 06/20/2014 Views: 2278

Description

Shared folder caching policy


Procedure

Client-side caching enables Windows clients to cache files on a share so that the files are
available for offline use. Client-side caching can be specified from the filer or from a Windows
2000, XP, or 2003 client. A shared folder caching policy can be set to the following options:
no_caching: Disallow Windows clients from caching any files on this share.

manual_caching: Allow users on Windows clients to manually select files to be cached.

auto_document_caching: Allow Windows clients to cache user documents on this share. The
actual caching behavior depends upon the Windows client.

auto_program_caching: Allow Windows clients to cache programs on this share. The actual
caching behavior depends upon the Windows client.

Manual caching is enabled by default for new shares.


The caching policy can be set for a particular share via CLI or Computer Management
CLI:
1. Use the following syntax to change the share options:
cifs shares -change sharename
{ -comment description | -nocomment } { -maxusers userlimit |
-nomaxusers } { -forcegroup groupname | -noforcegroup } {
-nosymlink_strict_security | -symlink_strict_security }
{ -widelink | -nowidelink }
{ -umask mask | -noumask }
{ -novscan | -vscan }
{ -novscanread | -vscanread }
{ -no_caching | -manual_caching
-auto_document_caching | -auto_program_caching }

Computer Management:
1. Right click on the share and select Properties, and click Caching... under the
General tab

Note:
If the policy is manual, each client (desktop user) can turn on the caching option for a
mapped drive. To do so, the client can open My Computer, right-click the folder, and select
Make Available Offline.
@@@@@@@
Triage Template - How to troubleshoot multiprotocol issues with NFS and CIFS

KB Doc ID 1013720 Version: 7.0 Published date: 07/14/2016 Views: 7796

Description

TECHNICAL TRIAGE TEMPLATE

SECTION 1: Usage
Use:

Refer to this template during new case creation on the NetApp Support site.

It is used by TSEs/Partners when a customer calls Support directly, to help frame the issue.

Section 2 provides links to common solutions and additional resources for the Product/Technology.

Section 3 provides information to assist with troubleshooting.

Section 4 provides steps to gather relevant data to open a case.

Product/Technology:
CIFS, NFS
Audience:
Customer/Partner/TSE working on this Product/Technology
Procedure

SECTION 2: Solution
Common issues/solutions:

2010717: CIFS Session Setup Error STATUS_ACCESS_DENIED

2012905: The options cifs.scopeid setting prevents CIFS access

2013577: Cifs setup may fail with Argument list too long when joining a Windows Server 2008 Domain

2011583: Filer stops accepting new CIFS connections

2011416: cf giveback command error message: cifs users with open files on partner node

1010189: How to determine the suitable UNIX-style permissions in a multiprotocol environment

Additional Help:
Customer & Partner Support Community
SECTION 3: Troubleshooting

From the host, perform an nslookup for the controller, then from the
controller, ping the host by hostname and run the following command:
filer> dns info

If any of these fail, refer to the DNS template 1012774: Triage Template How
to troubleshoot DNS issues when there could be multiple issues on the
customer's network.

From the host, verify proper permissions are set on the file or folder allowing
users to access the file.
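The name-resolution checks above can be scripted from any client. A minimal sketch using Python's standard socket module (the hostname is a placeholder; substitute your controller or host name):

```python
import socket

def dns_round_trip(hostname: str):
    """Forward-resolve hostname to an IP, then attempt the reverse
    lookup, mirroring the nslookup/ping checks in Section 3."""
    ip = socket.gethostbyname(hostname)        # forward lookup
    try:
        name = socket.gethostbyaddr(ip)[0]     # reverse lookup
    except socket.herror:
        name = None                            # no PTR record found
    return ip, name

# Example against the local host; substitute your filer's hostname.
ip, name = dns_round_trip("localhost")
print(ip)
```

If either direction fails for the controller or a client, follow the DNS triage template referenced above before continuing.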

SECTION 4: Data required for a new case


Customer - If opening a case on the NetApp Support site,
please be prepared to discuss these questions with the
Technical Support Engineer that works on your case.
TSE - Copy/Paste these questions & the customer's answers
into a private case note.

1. INPUT these questions:

What is the issue? Be specific

Was there an error?

What operation was being attempted at the time of the error?

What time/date did this error occur (so the logs can be checked later)?

When was this last working properly?

What mode of authentication is your controller using: UNIX style, NTFS style, NIS, LDAP,
or any other?

What type of data object is being accessed? (CIFS share, NFS mount point)

Is there both a CIFS share AND an export in /etc/exports, for this object?

What is the security style of the volume and qtree being accessed?


filer> qtree status

Who is attempting to access it? (UNIX user mapped through /etc/usermap.cfg, or a
CIFS client trying to access an NFS mount, etc.)

What is the exact error message received? What is reflected in /etc/messages?

Are all users affected the same way? Which users are affected?

Are administrators/ root users affected differently?

Did this function properly before, or is this a new configuration? If preexisting, what changed?

Were any changes made to the host's environment (subnet, DNS, router, AD
server, domain, or OS patches)?

Is there an NIS server in the environment and is the storage system acting as
a secondary NIS server?
filer> options nis

Verify that CIFS and NFS are licensed and enabled on the controller:
filer> license

Verify that CIFS is configured on the controller:


filer> cifs domaininfo

Check the NFS exports:


filer> exportfs

Verify the volumes are exported and rights are not being denied to the users.

Is the controller able to resolve the hosts in DNS, both forward and reverse?

Check the CIFS shares permissions:


filer> cifs shares
In /etc/exports, is the parent exported before the child? (That is, /vol/vol0
sec,rw appears first, and THEN /vol/vol0/qtree1 sec,rw, not the other way
around.)

If opening a case on the NetApp Support site and files are under 25 MB, they can be
uploaded with the case. For information on how to UPLOAD files via
FTP/HTTP/AsperaConnect, see KB 1010090: How to upload a file to NetApp.
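The parent-before-child /etc/exports ordering asked about above can be checked mechanically. A small sketch of that check (the paths are the sample values from the question; the helper itself is illustrative, not a NetApp utility):

```python
def parents_precede_children(export_paths: list) -> bool:
    """True if every export's parent directory (when it is also
    exported) appears earlier in the list than its children."""
    seen = []
    for path in export_paths:
        for other in export_paths:
            # A parent export that has not yet appeared means the
            # child was listed first, which this check flags.
            if other != path and path.startswith(other + "/") and other not in seen:
                return False
        seen.append(path)
    return True

print(parents_precede_children(["/vol/vol0", "/vol/vol0/qtree1"]))   # True
print(parents_precede_children(["/vol/vol0/qtree1", "/vol/vol0"]))   # False
```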

2. Please UPLOAD the requested information:

Generate a new AutoSupport:


filer> options autosupport.doit now

Send copies of /etc/passwd, /etc/nsswitch.conf, /etc/resolv.conf,


/etc/usermap.cfg and /etc/exports.

Run the following commands and send their outputs:

filer> cifs sessions

filer> wcc -u <username>

filer> wcc -s <username>

fsecurity show /vol/vol_name/directory_name

fsecurity show /vol/vol_name/directory_name/file_name

Collect a packet trace from the storage system during the attempted access
and send the output file to NetApp:
filer> pktt start all -d /etc -i <hostIPaddress>
filer> pktt stop all

@@@@@@@@@@
How to recover CIFS shares

KB Doc ID 1011048 Version: 4.0 Published date: 06/20/2014 Views: 3023

Description

How to recover a storage system's Common Internet File System protocol (CIFS) shares.
Procedure

Make sure that the snapshot directory is visible.


Windows:

Look in the ~snapshot directory for a recent known good snapshot.

Locate the cifsconfig.cfg file in the /etc directory of the snapshot.

Terminate CIFS:
cifs terminate

Copy the cifsconfig.cfg file to the /etc directory located in the active file
system root volume and issue the following command:
cifs restart

The CIFS shares should now be restored.


Note:
1. cifsconfig.cfg was actually split into two separate files in Data ONTAP 6.5
and 7. These files are created by cifs setup and should not be manually
edited. All changes should be made by re-running cifs setup. Re-running cifs
setup does not delete shares.
cifsconfig_share.cfg: handles share configuration such as permissions and options

cifsconfig_setup.cfg: handles the basic cifs options such as security style
2. Other files of interest (which may at times need to be restored in such a
case):
cifs_homedir.cfg: Specifies the location of cifs home directories. This file can be
edited directly with wordpad or vi.

cifs_nbalias.cfg: Specifies all of the netbios aliases for the filer. This file can be
edited directly with wordpad or vi.
lclgroups.cfg: This file handles the local CIFS groups. You can make changes to this
file with useradmin commands, or by connecting to the filer using the Windows MMC
for users and groups.
Caution: None of these files should be edited or copied while cifs is running.
Terminating cifs, using ndmpcopy to copy these files out of the .snapshot, and then
restarting cifs will, however, work to restore your previous configuration. If this does
not work for any reason, you will have to re-run cifs setup and recreate the CIFS
shares.
@@@@@@@@@@@@@@
How to grant and remove access to CIFS shares

KB Doc ID 1011517 Version: 4.0 Published date: 06/20/2014 Views: 15885

Description

When setting access permissions for a CIFS share, an error occurs:


> cifs access <share> <user> <permission>
Unknown user/group <user>

Procedure

There are three ways to grant or remove access to CIFS shares:

Server Manager

Filer Command Line

FilerView

A. Granting and removing access using Server Manager:


1. Log into a Windows NT workstation or server using a user account that
has Administrator rights to the storage system.
2. Start Server Manager
3. Select Computer
4. Click Select Domain
5. Enter the storage system's name for the Domain, in the format
\\Filername
6. Once the storage system's name is highlighted, click Computer
7. Click Shared Directories
8. Select the Share needed to modify the access

9. Click Permissions.
10. Add, delete, or modify rights for the users and groups listed.
11. Add or delete users and groups as needed.
12. Click OK.
B. Granting and removing access using the CIFS access command:
1. Access the storage system console.
2. Use the cifs access command to grant or remove access to a share.
3. To add access to a share, type:
cifs access <share> [-g] <user|group> <rights>

Rights can be UNIX-style combinations of r w x - or NT-style "No Access",
"Read", "Change", and "Full Control".
Note:
For users and groups, double quotes may be required to enclose names with
spaces. For example: "NETAPP\Domain Admins".
4. To remove access to a share type:

cifs access -delete <share> [-g] <user|group>

C. Granting and removing access using FilerView:


1. Log into FilerView and navigate to CIFS.
2. Select CIFS > Shares > Manage
3. For the share to be modified, on the right side under Operations,
select Change Access.
4. Add and Delete Users and Groups as needed.
@@@@@@@@@@@@@@@
What does CIFS share level ACL permissions '** priv access only **' and '** no
access **' mean?

KB Doc ID 3011997 Version: 5.0 Published date: 07/21/2014 Views: 3572

Answer

The following displayed Common Internet File System protocol (CIFS) share
permissions indicate that no share-level Access Control List (ACL) exists; only root or
administrator on the filer may access the particular shares:
** priv access only ** is displayed if the volume or qtree security style is UNIX
** no access ** is displayed if the volume or qtree security style is NTFS or MIXED
Use the cifs access command to set permissions. See the cifs access man page or enter
cifs access on the filer for command syntax and details.
Sample cifs shares output for UNIX and NTFS/MIXED security style respectively:
Name     Mount Point        Description
----     -----------        -----------
test1    /vol/vol0/test1    ** priv access only **
test2    /vol/vol0/test2    ** no access **

@@@@@@@@@@@@@

Will disabling oplocks disrupt existing locks?

KB Doc ID 3011634 Version: 9.0 Published date: 07/20/2016 Views: 2448

Answer

Disabling CIFS oplocks does not affect existing oplocks; it only disables any future
oplocks. Any existing CIFS oplocks continue to exist and are handled according to the
criteria used when the oplock was created.

To disable oplocks at the filer level:
options cifs.oplocks.enable off

To disable oplocks at the qtree level:
qtree oplocks <qtree_path> disable

Additional information on oplocks can be found in the File Access and Protocols Management
Guide for your version of Data ONTAP.
@@@@@@@@@@@@@

What is the maximum file size supported?

KB Doc ID 3010842 Version: 11.0 Published date: 02/11/2016 Views: 9068

Answer

Common Internet File System protocol (CIFS), Network File System (NFS) limitations
Maximum file size for CIFS shares and NFS exports
Write Any Where File Layout (WAFL) maximum file size limit
Beginning with Data ONTAP 5.1, file sizes of up to 8 terabytes (TB) can be created. Data
ONTAP 7.0 allows a maximum file size of 16 TB. NetApp recommends a maximum file size of
500 gigabytes (GB) to maintain optimal performance and to keep traditional backup/restore
times at a minimum.
Note:

File sizes are also bound by the amount of physical disk space available on
the filer, and the supported capacity on the controller

Regardless of the version of Data ONTAP in use, some protocols support files
only up to a certain size:

NFSv2 has a maximum file size of 2 GB

NFSv3 allows for 64-bit file offsets, which is larger than the WAFL 16 TB
maximum

Microsoft NTFS has a 16 TB file size limit in Windows 2003

CIFS protocol allows for 64-bit file offsets, but most likely the WAFL
limit will be reached beforehand

Certain client operating systems may also limit the file size they can access,
regardless of protocol
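For reference, the limits discussed above work out to the following byte counts (binary units, which is how these limits are conventionally stated):

```python
# File-size limits mentioned above, expressed in bytes.
GB = 1024 ** 3
TB = 1024 ** 4

nfsv2_limit = 2 * GB     # NFSv2: 32-bit signed offsets cap files at 2 GB
wafl_limit = 16 * TB     # WAFL maximum as of Data ONTAP 7.0
recommended = 500 * GB   # NetApp's recommended practical ceiling

print(nfsv2_limit)  # 2147483648
print(wafl_limit)   # 17592186044416
```

Note how the NFSv2 limit is exactly the signed 32-bit boundary (2^31), which is why 64-bit-offset protocols such as NFSv3 and CIFS instead run into the WAFL limit first.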

@@@@@@@@@@@@@@@@@
What syntax should I use in /etc/usermap.cfg if I have multiple Windows domains to
map UNIX users to windows accounts?

KB Doc ID 3010195 Version: 8.0 Published date: 11/20/2015 Views: 1649

Answer
1. Place *\* == * in the usermap.cfg file.
2. Use the cifs.search_domains option to provide the ordered list of domain
names:
options cifs.search_domains WIN2K,WINNT4
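The effect of the ordered search list can be sketched as a simple first-match lookup. In this illustration the domain contents are made-up placeholders and the helper is hypothetical, not a NetApp API; it only mimics the search-order semantics:

```python
def find_account(user, search_domains, domain_users):
    """Return 'DOMAIN\\user' for the first domain in search order
    that contains the account, mimicking cifs.search_domains."""
    for domain in search_domains:
        if user in domain_users.get(domain, set()):
            return domain + "\\" + user
    return None  # no listed domain knows this user

# Hypothetical contents: 'alice' exists in both domains; WIN2K wins by order.
users = {"WIN2K": {"alice", "bob"}, "WINNT4": {"alice"}}
print(find_account("alice", ["WIN2K", "WINNT4"], users))  # WIN2K\alice
```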

@@@@@@@@@
CIFS volume migration using SnapMirror
Before starting the migration, check the options below.

Allowing NetApp SnapMirror through a firewall


During a recent NetApp SnapMirror implementation we had a tremendous time
getting SnapMirror to work. After much troubleshooting we discovered that it was
due to ACLs on the customer's switches. The following details the firewall
configuration required.
TCP Ports used by NetApp SnapMirror
TCP 10566 (Source System binds on this port)
TCP 10569 (Source system listens on this port)
TCP 10565 (If using multipath, this is what the destination System listens on)
TCP 10565, 10567, 10568 (Destination System listens on these ports)
Just open TCP 10565-10569 bi-directionally and be done with it (if you can get away
with it).
Add the SnapMirror license on both filers.

Start the migration during non-peak hours on the storage.

1. First, you'll need to set up a SnapMirror relationship of the CIFS volume between
the source and destination filers.
2. Make a backup copy of the /etc/cifsconfig_shares.cfg file.
3. Execute cifs terminate on the source filer (downtime starts here).
4. Update (quiesce if necessary) and break the SnapMirror relationship.
5. Take the source filer offline.
6. Assign the source filer's IP to the new filer.
7. Reset the source filer's account in Active Directory & DNS (if applicable).
8. Execute cifs setup on the new filer.
9. It goes without saying that you will assign the source filer's hostname to the
destination filer, as well as join it to the AD (assuming the source filer was joined).
10. Execute cifs terminate on the destination filer and replace the
cifsconfig_shares.cfg with the backup copy you made in step 2.
11. Execute cifs restart on the destination filer.
12. Test client access.

If you don't want users to have to remap share folders (if you are using both filers),
use the commands below:
options cifs.netbios_aliases filer2 ---> on filer1
cifs nbalias load
@@@@@@@@@@@@@@@@@@@@

You might also like