# To add a local user
useradmin user add user_name -g group_name
# To modify a local user
useradmin user modify user_name -g group_name
# To list user information
useradmin user list user_name
# To delete a local user
useradmin user delete user_name
# To add a new group
useradmin group add group_name -r role
useradmin group add Helpers -r admin
# To modify an existing group
useradmin group modify group_name -g new_group_name
# To list group information
useradmin group list group_name
useradmin group list Helpers
# To delete a group
useradmin group delete group_name
# To add an existing Windows domain user to a group
useradmin domainuser add user_name -g group_name
# To list Windows domain users in a group
useradmin domainuser list -g group_name
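A quick worked example (hypothetical user and group names; admin is one of the built-in roles) might run:
useradmin group add Helpers -r admin
useradmin user add jsmith -g Helpers
useradmin user list jsmith
useradmin group list Helpers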
# To modify share access
cifs access share user rights
cifs access datatree1 administrator "Full Control"
# To delete an ACL (share-level access control list) entry for a user on a share
# Note: the -g option specifies that the user is the name of a UNIX group.
cifs access -delete share user

## Multiprotocol options
# A CIFS user can access files without disrupting UNIX permissions
# When enabled, UNIX qtrees appear to CIFS clients as NTFS volumes (ONTAP 7.2 or later)
options cifs.preserve_unix_security on

## Reconfiguring CIFS
# Disconnect users and stop the CIFS server
cifs terminate
# Reconfigure the CIFS service
cifs setup

# These options are needed to configure basic time services
options timed.max_skew 5m
options timed.proto ntp
options timed.sched hourly
options timed.servers [server_ip_or_name,...]
options timed.enable on
options timed.log on

## Important configuration files in a Windows domain environment
# Contains the storage system SID
/etc/filersid.cfg
# Contains the Windows domain SID
/etc/cifssec.cfg
# Contains domain administrator accounts
/etc/lclgroups.cfg
# To resolve SIDs, run
cifs lookup
# Display your domain information
cifs domaininfo
# Test the storage system connection to the Windows DC
cifs testdc [WINSsvrIPaddress] domainname [storage_sys_name]
# To display the preferred domain controller list
cifs prefdc print [domain]
# To add a preferred domain controller list
cifs prefdc add domain address [address ...]
# To delete a preferred domain controller list
cifs prefdc delete domain
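For instance (hypothetical domain name and addresses), forcing the filer to prefer local domain controllers and then confirming domain connectivity could look like:
cifs prefdc add NTDOM 10.10.1.5 10.10.1.6
cifs prefdc print NTDOM
cifs domaininfo
cifs testdc NTDOM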
Checklist for troubleshooting CIFS issues
• Use "sysstat -x 1" to determine how many CIFS ops/s and how much CPU is being utilized
• Check /etc/messages for any abnormal messages, especially for oplock break timeouts
• Use "perfstat" to gather data and analyze (note information from "ifstat" and "statit")
• "pktt" may be necessary to determine what is being sent/received over the network
• "sio" should/could be used to determine how fast data can be written to / read from the filer
• Client troubleshooting may include review of event logs, ping of the filer, and testing with a different filer or Windows server
• If it is a network issue, check "ifstat -a" for collisions
• If it is a gigabit issue, check whether flow control is set to FULL on the filer and the switch
• On the filer, if one volume is having an issue, run "df" to see if the volume is full
• Run "df -i" to see if the filer is running out of inodes
• From "statit" output, if one volume is having an issue, check for disk fragmentation
• Try the "netdiag -dv" command to test for a filer-side duplex mismatch. It is important to find out what the benchmark is and whether it is a reasonable one
• If the problem is poor performance, try a simple file copy using Explorer and compare it with the application's performance. If they are both the same, the issue probably is not the application. Rule out client problems and make sure it is tested on multiple clients. If it is an application performance issue, get all the details about:
  ◦ The version of the application
  ◦ What specifics of the application are slow, if any
  ◦ How the application works
  ◦ Is this equally slow while using another Windows server over the network?
  ◦ The recipe for reproducing the problem in a NetApp lab
• If the slowness only happens at certain times of the day, check whether the times coincide with other heavy activity such as SnapMirror, Snapshot copies, dump, etc. on the filer
• If normal file reads/writes are slow:
  ◦ Check for a duplex mismatch (both client side and filer side)
  ◦ Check if oplocks are used (assuming they are turned off)
  ◦ Check if there is an anti-virus application running on the client. This can cause performance issues, especially when copying multiple small files
  ◦ Check "cifs stat" to see if the Max Multiplex value is near the cifs.max_mpx option value. Common situations where this may need to be increased are when the filer is used by a Windows Terminal Server or any other kind of server that might have many users opening new connections to the filer
  ◦ Check the value of OpLkBkNoBreakAck in "cifs stat". Non-zero numbers indicate oplock break timeouts, which cause performance problems

NFS Administration
# Examples to export resources with NFS on the CLI
exportfs -a
exportfs -o rw=host1:host2 /vol/volX
# Exportable resources
• Volume
• Directory/Qtree
• File
## Target examples from /etc/exports
# Host - use name or IP address
/vol/vol0/home -rw=venus
/vol/vol0/home -root=venus,rw=venus:mars
# Netgroup - use the NIS group name
/vol/vol0/home -rw=mynisgroup
# Subnet - specify the subnet address
/vol/vol0/home -rw="192.168.0.0/24"
# DNS - use DNS subdomain
/vol/vol0/home -rw=".eng.netapp.com"
# Rules for exporting resources
• Specify the complete pathname; it must begin with the /vol prefix
• You cannot export /vol, which is not a pathname to a file, directory, or volume
  ◦ Export each volume separately
• When exporting a resource to multiple targets, separate the target names with a colon (:)
• Hostnames are resolved using DNS, NIS, or /etc/hosts per the order in /etc/nsswitch.conf
# Access restrictions specify what operations a target can perform on a resource
• Default is read-write (rw) and UNIX Auth_SYS (sys)
• The "ro" option provides read-only access to all hosts
• The "ro=" option provides read-only access to specified hosts
• The "rw=" option provides read-write access to specified hosts
• The "root=" option specifies that root on the target has root permissions
# Displays all current exports in memory
exportfs
# To export all file system paths specified in the /etc/exports file
exportfs -a
# Adds exports to the /etc/exports file and in memory.
# Default export options are "rw" (all hosts) and "sec=sys".
exportfs -p [options] path
exportfs -p rw=hostA /vol/vol2/ora
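As a sketch combining those options (hypothetical host and path names), a persistent export that is read-only to a subnet but read-write to one admin host could be added and then listed:
exportfs -p ro=192.168.0.0/24,rw=adminhost /vol/vol1/qt1
exportfs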
# To export a file system path temporarily without adding a corresponding
# entry to the /etc/exports file
exportfs -i -o ro=hostB /vol/vol0/lun2
# Reloads only exports from the /etc/exports file
exportfs -r
# Unexports all exports
exportfs -uav
# Unexports a specific export
exportfs -u /vol/vol0/home
# Unexports an export and removes it from /etc/exports
exportfs -z /vol/vol0/home
# To verify the actual path to which a volume is exported
exportfs -s /vol/vol9/vf19
# To display the list of clients mounting from the storage system
showmount -a filerX
# To display the list of exported resources on the storage system
showmount -e filerX
nfsstat -m
# To check the NFS export access cache for a target
exportfs -c clientaddr path [accesstype] [securitytype]
exportfs -c host1 /vol/vol2 rw
# To remove entries from the access cache
exportfs -f [path]
# Flush the entire access cache
exportfs -f
# To add an entry to the WAFL credential cache
wcc -a -u unixname -i ipaddress
wcc -u root
# To delete an entry from the WAFL credential cache
wcc -x uname
# To display statistics about the WAFL credential cache
wcc -d -v uname
wcc -d
# Displays the UNIX user mappings for the specified Windows account
wcc -s ntname
# local admin
wcc -s administrator
# domain admin
wcc -s development\administrator
# WCC rules
• A Windows-to-UNIX user mapping is not stored in the WCC
• The WCC contains the cached user mappings from UNIX user identities (UID/GID) to Windows identities (SIDs)
• The wcc command is useful for troubleshooting user-mapping issues
• The cifs.trace_login option must be enabled
# Factors that affect NFS performance
• CPU
• Memory
• Network
• Network interface
• System bus
• Nonvolatile random access memory (NVRAM)
• I/O devices
  ◦ Disk controllers
  ◦ Disks
# Data ONTAP commands that can be used to collect performance data
sysstat, netstat, ifstat, stats, statit, netdiag, wafl_susp, nfsstat, nfs.mountd.trace, nfs_hist, pktt
# Client tools
ethereal, netapp-top.pl, perfstat, sio, sar, iostat, vmstat
# Displays per-client statistics since last zeroed
nfsstat -h
# Displays the list of clients whose statistics were collected on a per-client basis
# Note: the nfs.per_client_stats.enable option must be set to "on"
nfsstat -l
# Zeroes current cumulative and per-client statistics
nfsstat -z
# Includes reply cache statistics
nfsstat -c
# Displays stats since boot time
nfsstat -t
# Displays reply cache statistics, incoming messages and allocated mbufs
# Note: most commonly used option to decode exports and mountd problems
nfsstat -d
# Displays the number and type of NFS v2/v3 requests received by all FlexCache volumes
nfsstat -C
# To enable mountd tracing of denied mount requests against the storage system
options nfs.mountd.trace on
# Display the top NFS clients currently most active for the storage system
netapp-top.pl -i 30 filerX
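A short troubleshooting sketch tying these together (hypothetical host and path): confirm what access a client is granted, flush and reload the exports after editing /etc/exports, and check how a Windows account maps to UNIX:
exportfs -c host1 /vol/vol2 rw
exportfs -f /vol/vol2
exportfs -r
wcc -s administrator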
# Captures all needed performance information from the storage system and hosts (clients)
perfstat -f filerX -h host1 -t 5 -i 12 > perfstat.out

# Recommended NFS mount options for various UNIX hosts
# Note: the mount options "forcedirectio" and "noac" are only recommended for databases
Linux: rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3
Solaris: rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,forcedirectio,noac,vers=3
AIX: cio,rw,bg,hard,intr,proto=tcp,vers=3,rsize=32768,wsize=32768
HPUX (11.31 or later): rw,bg,hard,intr,rsize=32768,wsize=32768,timeo=600,noac,forcedirectio 0 0

# Recommended test to collect NFS statistics
nfsstat -z (zero the NFS statistics at the client)
netstat -i (network stats before the tests at the client)
mount -o rsize=32768,wsize=32768 filerX:/vol/vol2/home /mnt/nfstest
cd /mnt/nfstest
nfsstat -m (output of the mountpoints and the mount flags)
time mkfile 1g test (write test)
time dd if=/mnt/nfstest/test of=/tmp/test (read test)
time cp test test1 (read and write test)
nfsstat -c (verify nfsstat output)
# Check the nfsstat output for retransmissions, timeouts and bad calls
• timeout > 5%: requests are timing out before the server can answer them
• badxid ~ timeout: server is slow; check nfsstat -m
• badxid ~ 0 and timeouts > 3%: packets are being lost in the network; check netstat. If this number is the same as bad calls, the network is congested.
• retrans: may indicate a network or routing problem if retransmits > 5%
• null > 0: the automounter is timing out; increase the timeout parameter in the automounter configuration
# In the output of the "nfsstat -m" command, the following parameters are critical
• srtt: smoothed round-trip time
• dev: estimated deviation

# NFS troubleshooting
Problem: Stale NFS file handle
Sample error messages: NFS Error 70
Resolution tips
• Check connectivity to the storage system (server)
• Check the mountpoint
• Check the client vfstab or fstab as relevant
• Check showmount -e filerX from the client
• Check exportfs from the command line of the storage system
• Check the storage system /etc/exports file
Problem: NFS server not responding
NFS Server (servername) not responding; the NFS client hangs, the mount hangs on all clients
Resolution tips
• Use ping to contact the hostname of the storage system (server) from the client
• Use ping to contact the client from the storage system
• Check ifconfig from the storage system
• Check that the correct NFS version is enabled
• Check all nfs options on the storage system
• Check the /etc/rc file for nfs options
• Check the nfs license
Problem: Permission denied
nfs mount: mount: /nfs: Permission denied
Resolution tips
• Check showmount -e filername from the client
• Try to create a new mountpoint
• Check exportfs at the storage system command line to see what the system is exporting
• Check the auditlog for a recent exportfs -a
• Check /etc/log/auditlog for messages related to exportfs
• Check the storage path with exportfs -s
• Check whether the client can mount the resource with the exportfs -c command
• Flush the access cache and reload the exports, then retry the mount
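Putting those tips together (hypothetical client and path), a filer-side check for a permission-denied mount might run, before retrying the mount from the client:
exportfs -s /vol/vol2/home
exportfs -c client1 /vol/vol2/home rw
exportfs -f
exportfs -r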
Problem: Network performance slow
Poor NFS read and/or write performance
Resolution tips
• Check sysstat 1 for nfs ops/sec vs. kbs/sec
• Check the parameters on the network interface card (NIC) with ifconfig -a
• Check netdiag
• Check network condition with ifstat -a; netstat -m
• Check the client-side network condition
• Check the routing table on the storage system with netstat
• Check the routing table on the client
• Check perfstat.sh
• Check throughput with the sio_ntap tool
• Check rsize and wsize
• Consider configuring jumbo frames (the entire path must support jumbo frames)
Problem: RPC not responding
RPC: Unable to receive or RPC: Timed out
Resolution tips
• Use ping to contact the storage system (server)
• From the storage system, use ping to contact the client
• Check the mountpoint
• Check showmount -e filerX from the client
• Verify the name of the directory on the storage system
• Check exportfs to see what the storage system is exporting
• Use the "rpcinfo -p filerX" command from the client to verify that the RPCs are running
Problem: No space left on disk
No space left on disk error
Resolution tips
• Check df for available disk space
• Check for snapshot overruns
• Check the quota report for exceeded quotas

Data Protection and Retention

1. What is Information Lifecycle Management (ILM)
The concept of information lifecycle management (or data lifecycle management) is based on assigning a value to data as it ages. The lifecycle consists of five phases.
• Phase I - Data Creation
Data is created during the first phase of ILM. The data created consists of dynamic, static, and reference information.
• Phase II - Data Classification, Security, and Protection
During this phase, data is classified, secured, and protected. Data regulation is implemented at this phase.
• Phase III - Data Migration (Backup and Recovery)
In this phase, data migration is implemented.
• Phase IV - Data Retention and Archiving
• Phase V - Data Disposition

Business Continuance Solutions
• snaprestore
SnapRestore enables rapid revert (restore) of single files or volumes so operations can resume quickly.
• snapmirror
There are two types of SnapMirror solutions:
  ◦ Asynchronous SnapMirror
  Automated file system or qtree replication for disaster recovery or data distribution. Updates of new and changed data from the source to the destination occur on a schedule defined by the storage administrator.
  ◦ Synchronous SnapMirror
  Replicates writes from the source volume to the partner destination volume at the same time they are written to the source volume. Updates are performed in real time.
• snapvault
A low-overhead, disk-based online backup of homogeneous storage systems for fast and simple restores.
• Open Systems SnapVault (OSSV)
A heterogeneous disk-based data protection feature of Data ONTAP that enables data stored on multiple open-systems platforms (Windows/UNIX-based clients) to be backed up to and restored from a central storage system.

System Management Solutions
SnapLock technology is a software feature that allows companies to implement the data permanence functionality of traditional WORM (write once, read many) storage in an easier-to-manage, faster-access, lower-cost magnetic disk-based solution. There are two types:
• snaplock
  ◦ SnapLock Compliance
  Designed as a comprehensive archival solution that meets US Securities and Exchange Commission regulations for data retention. SnapLock volumes of this type cannot be altered or deleted before the expiration of the retention period.
  ◦ SnapLock Enterprise
  Designed for organizations with self-regulated and best-practice requirements for protecting digital assets with WORM-like storage devices. Data written to a SnapLock Enterprise volume can be deleted by an administrator.
• Data Fabric Manager (DFM)
Data Fabric Manager provides centralized management of distributed NetApp, NetCache, storage, and NearStore appliances.

OS-Based Data Protection Solutions
Snapshot technology
Creates a read-only copy of a storage appliance's file system, readily accessible via special subdirectories (i.e. .snapshot), taken automatically on a schedule or manually. Creating Snapshot copies is very quick because a Snapshot copy is an index into the file system.
Disk Sanitization
Disk sanitization is the process of physically removing data from a disk by overwriting patterns on the disk in a manner that precludes the recovery of that data by any known recovery methods.

2. SnapRestore
SnapRestore Considerations
• Time required for data recovery
If the amount of corrupted data is small, it is probably easier to copy files from a Snapshot copy. If the amount of data to be recovered is large, it takes a long time to copy files from a Snapshot copy or to restore from tape. In this case, SnapRestore is preferred for recovering from data corruption.
• Free space required for single-file data recovery
To use the single-file SnapRestore feature, you must have enough free space on the volume to recover the single file.
• Reboot required for root volume recovery
• Performance hit for single-file SnapRestore
A performance penalty is encountered during snapshot deletion, because the active maps across all Snapshot copies need to be checked. After doing a single-file SnapRestore, the system has to look at all Snapshot copies to see if it can free the blocks in the file. When a block is allocated, it cannot be reallocated until it is freed in the active file system and is not in use by any Snapshot copy.

Reverting a Volume or File
• You can use SnapRestore to revert a volume or file to a snapshot at any time
• NOTE: Reverting an aggregate Snapshot copy will revert ALL volumes in the aggregate
• Prerequisites
  ◦ SnapRestore licensed
  ◦ Snapshot copies must exist on the appliance so that you can select a snapshot for the reversion
  ◦ The volume to be reverted must be online
  ◦ The volume to be reverted must not be a mirror used for data replication
  ◦ Enough free space must be available for recovery of a single file
Cautions
• You cannot undo a SnapRestore reversion!
• Avoid selecting a snapshot taken before any SnapMirror snapshot. If you do this, Data ONTAP can no longer perform incremental updates to the mirror; it must recreate the baseline.
• You cannot use SnapRestore to undo a snapshot deletion!
• After you revert a volume, you lose all snapshots that were taken after the selected snapshot.
• While SnapRestore is in progress, Data ONTAP cannot delete or create snapshots.
• Reverting a root volume requires a reboot, and will restore earlier configuration files.
Steps to Revert a Volume
1. Notify network users
2. Review the list of available snapshots
   snap list volname
3. Enter the name of the snapshot to be used for reverting the volume
   snap restore -t vol -s snapshot_name path_and_volname
4. Enter "y" to confirm reversion of the volume.
NOTES:
• Reverting an aggregate is not recommended!
• NFS users should dismount the affected volume before the reversion. If they do not dismount the volume, they might see "Stale File Handle" error messages after the reversion.
Steps to Revert a File
1. Notify network users
2. Review the list of available snapshots
   snap list volname
3. Enter the name of the snapshot to be used for reverting the file
   snap restore -t file -s snapshot_name -r new_path_and_filename path_and_filename
4. Enter "y" to confirm reversion of the file.
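A minimal sketch of both revert types (hypothetical volume, snapshot, and file names):
# Revert an entire volume to a nightly snapshot
snap list vol1
snap restore -t vol -s nightly.1 /vol/vol1
# Revert a single file, restoring it into a different existing directory
snap restore -t file -s nightly.1 -r /vol/vol1/restored/budget.xls /vol/vol1/home/budget.xls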
NOTES:
• A file can only be restored to an existing directory. The SnapRestore default is to restore the file to its original directory path. The "-r" option can be used to specify a different (existing) directory.
• NFS users who try to access a reverted file without first reopening it might get the "Stale File Handle" error message after the reversion.

3. SnapMirror
SnapMirror overview
SnapMirror provides a fast and flexible enterprise solution for replicating data over local area, wide area, and Fibre Channel networks. SnapMirror addresses multiple application areas such as mission-critical data protection and business continuance in case of a disaster:
• Data migration
• Disaster recovery
• Remote access to data and load sharing
• Remote tape archival
SnapMirror Modes
• Asynchronous: SnapMirror replicates Snapshot images from a source volume or qtree to a partner destination volume or qtree on a schedule.
• Synchronous: SnapMirror replicates writes to a partner destination volume at the same time they are written to the source volume.
SnapMirror Terminology
• Source: the storage system whose data is to be replicated
• Destination: the storage system that contains the data replica
• Volume SnapMirror (VSM): replication from a source volume to a destination volume
• Qtree SnapMirror (QSM): replication from a source qtree to a destination qtree
SnapMirror Components
• Source volumes and qtrees: SnapMirror source volumes and qtrees are writable data objects.
• Destination volumes and qtrees: SnapMirror destination volumes and qtrees are read-only objects, usually on a separate storage system. The destination volumes and qtrees are normally accessed by users only when a disaster takes down the source system and the administrator uses SnapMirror commands to make the replicated data at the destination accessible and writable.
Async SnapMirror Theory of Operation
• The VSM initial baseline transfer
  ◦ Create a restricted destination volume
  ◦ For VSM first-time replication, all data in all snapshots on the source is transferred to the destination volume
  ◦ The baseline transfer is initiated and driven by the destination, which establishes a TCP connection with the source
  ◦ The read-only destination volume is brought online after the initial transfer completes
• The QSM initial baseline transfer
  ◦ Do not create a destination qtree; it is created automatically upon first-time replication
  ◦ For QSM, no snapshots are sent from the source to the destination
  ◦ With QSM, the destination qtree is read-only, while the hosting volume is writable
• Incremental updates process
  ◦ A scheduled process updates the mirror (destination system). After the source and destination file systems are synchronized for the first time, you can schedule incremental updates using the snapmirror.conf file. This file must be created in the destination root volume (/etc).
  ◦ The current snapshot is compared with the previous snapshot
  ◦ Changes are synchronized from source to destination
Volume versus Qtree SnapMirror
• VSM can be synchronous or asynchronous, while QSM is available in asynchronous mode only.
• VSM is a block-for-block replication; QSM is a file-based replication.
• VSM can occur only between volumes of the same type (both must be traditional or flexible).
• With VSM, the destination volume is always a replica of a single source volume and is read-only.
• With QSM, only the destination qtree is read-only, while the containing volume remains writable.
• VSM replicates all Snapshot copies on the source volume to the destination volume.
QSM replicates only one snapshot of the source qtree to the destination qtree.
• VSM can be initialized using a tape device (SnapMirror to tape); QSM does not support this feature.
• Cascading of mirrors is supported only for VSM.
Traditional and Flexible Volumes
• For VSM: like-to-like transfers only: flex-to-flex or trad-to-trad
• For QSM: you can snapmirror qtrees:
  ◦ From a traditional volume to a flexible volume
  ◦ From a flexible volume to a traditional volume
SnapMirror and FlexVol Space
• Space guarantee
  ◦ The volume space guarantee is disabled automatically on the destination volume
  ◦ As a result, it is possible to overcommit the aggregate
  ◦ When the relationship is broken, the space mode is identical on source and destination
• Overcommitting the aggregate
  ◦ More efficient disk space utilization on the destination
  ◦ When the relationship is broken, turn off the vol option fs_size_fixed and use vol size to resize the destination volume
  ◦ To overcommit an aggregate, create the destination flexvol with guarantee set to none or file
SnapMirror Control Files
• On the source system: /etc/snapmirror.allow
• On the destination system: /etc/snapmirror.conf
Syntax:
source:src_vol destination:dst_vol arguments schedule
source:/vol/src_vol/src_qtree destination:/vol/dst_vol/dst_qtree arguments schedule
src_hostname:/vol/src_vol/- dst_hostname:/vol/dst_vol/dst_qtree
("-" indicates all non-qtree data in the specified volume)
Arguments
kbs=kbs
Maximum transfer speed, in kilobytes per second, that Data ONTAP can use to transfer data.
restart={ never | always | default }
Restart mode that SnapMirror uses to continue an incremental transfer from a checkpoint if it is interrupted:
• never: transfers are always restarted from the beginning of a transfer and never from where they were before an interruption
• always: transfers are restarted if possible from where they were before an interruption
• default: transfers are restarted if they do not conflict with a scheduled transfer
Schedule
For asynchronous SnapMirror, a schedule must be set per relationship and consists of:
minute hour day_of_month day_of_week
Where:
• minute can be a value from 0-59
• hour can be 0 (midnight) to 23 (11 pm)
• day_of_month can be 1-31
• day_of_week can be 0 (Sunday) to 6 (Saturday)
• all possible values can be applied with an "*"
• a "-" means "never" and prevents this schedule entry from executing
SnapMirror Options
• snapmirror.access (if set to legacy, the /etc/snapmirror.allow file is used)
• snapmirror.enable
• snapmirror.log.enable
• snapmirror.checkip.enable
• snapmirror.delayed_acks.enable
• snapmirror.window_size
Caution
With VSM, if you upgrade your systems to a later version of Data ONTAP, upgrade the SnapMirror destination before you upgrade the SnapMirror source system.
Async SnapMirror Prerequisites
• Make sure the source volume or qtree is online
• For VSM
  ◦ Create a non-root restricted destination volume
  ◦ The SnapMirror source volume can be the root volume
  ◦ The destination volume capacity must be greater than or equal to the source
  ◦ The disk checksum type (block or zoned checksum) must be identical
  ◦ Quotas cannot be enabled on the destination volume
• For QSM
  ◦ The destination qtree must not exist and cannot be /etc
  ◦ The destination volume must have 5% extra space
  ◦ A destination qtree can be on the root volume
• TCP port range 10565-10569 must be open (the destination system contacts the source at TCP port 10566)
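As a sketch (hypothetical hostnames and volume names), a destination-side /etc/snapmirror.conf entry that throttles the transfer to 2000 KB/s and updates at 5 minutes past every hour, followed by the usual first-time initialization run from the destination (snapmirror initialize is the standard baseline command, assumed here rather than shown elsewhere in these notes):
# /etc/snapmirror.conf on dstfiler
srcfiler:vol1 dstfiler:vol1_mirror kbs=2000,restart=always 5 * * *
# Baseline transfer; the destination volume must exist and be restricted
dstfiler> vol restrict vol1_mirror
dstfiler> snapmirror initialize -S srcfiler:vol1 dstfiler:vol1_mirror
dstfiler> snapmirror status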
SnapMirror Snapshot copies are distinguished from the system Snapshot copies by a more elaborate naming convention and the label "snapmirror" in parentheses in "snap list" output.
The default name of a SnapMirror volume snapshot is:
dest_system(sysid)_name.number
The default name of a SnapMirror qtree snapshot is:
dest_system(sysid)_name-src.number | dst.number
Steps to Convert a Replica to a Writable File System
• To convert a mirror to a read/write volume or qtree, you must use snapmirror quiesce prior to using snapmirror break
  ◦ The snapmirror relationship is broken off
  ◦ The destination volume or qtree becomes writable
  ◦ Learn how to enable quotas on the converted file system
• What next?
  ◦ You can resynchronize the broken-off relationship, or
  ◦ You can release the relationship if you want to make the break permanent
NOTE: If you use SnapMirror for data migration, you can copy the /etc/quotas entries from the source to the /etc/quotas file of the destination before you convert the mirror to a regular volume or qtree. However, if you use SnapMirror for disaster recovery, you must keep a copy on the destination storage system of all /etc/quotas entries used by the source.
Example:
Dest> snapmirror quiesce /vol/dst_vol/dst_qtree
Dest> snapmirror break /vol/dst_vol/dst_qtree
Dest> quota on dst_vol
Resynchronize a Broken Relationship
When the relationship is broken, subsequent updates will fail. To resume incremental updates, you first have to re-establish the relationship. The snapmirror resync command (on the source or destination) enables you to do this without executing a new initial baseline transfer.
Example (from the destination system):
Dest> snapmirror resync [options] dst_hostname:dst_vol
Dest> snapmirror resync [options] dst_hostname:/vol/dst_vol/qtree
Releasing a Partner Relationship
You can release a mirror volume or qtree when you want to permanently remove it from a mirrored relationship. Releasing the mirror deletes Snapshot copies from the volume.
Example (from the source):
Src> snapmirror release src_vol dst_hostname:dst_vol
Src> snapmirror release /vol/src_vol/src_qtree dst_hostname:/vol/dst_vol/dst_qtree
NOTE: To make this removal permanent, delete the entry in the /etc/snapmirror.conf file.
SnapMirror to Tape
SnapMirror to tape supports SnapMirror replication over low-bandwidth connections by accomplishing the initial mirror between the source and destination volume using a physically transported tape.
1. The initial large-sized volume baseline snapshot is replicated to tape from the source filer.
2. The tape is physically transported to a tape drive at the destination site.
3. The tape is loaded and replication to the destination filer is started.
4. Incremental SnapMirror updates are then made over the low-bandwidth connection.
Example:
1. Start the initial baseline transfer from the source volume to tape:
Src> snapmirror store src_vol tapedevice
2. Remove the base snapshot on the source when the backup to tape is completed:
Src> snapmirror release src_vol tapedevice
3. Restore data from tape to the destination system:
Dest> snapmirror retrieve dst_vol tapedevice
NOTE: If you retrieve a backup tape into a file system that does not match the disk geometry of the source storage system used when writing the data onto tape, the retrieve can be extremely slow.

4. SnapVault
SnapVault is a disk-based storage backup feature of Data ONTAP that enables data stored on multiple storage systems to be backed up to a central, secondary NetApp storage system as read-only Snapshot copies. Both primary and secondary systems use SnapVault for backup and restore operations, based on the same logical replication engine as qtree SnapMirror.
SnapVault Basic Deployment
• SnapVault Primary to Secondary to Tape
  ◦ Enables storing an unlimited number of backups offline
  ◦ Can be used to restore data to the SnapVault secondary
  ◦ Reduces media costs
• SnapVault Primary to Secondary to SnapMirror
  ◦ SnapMirror backup and standby service for SnapVault
  ◦ SnapMirror backup and restore protection for SnapVault
  The primary system's data is backed up to a secondary system. Then Volume SnapMirror is used to mirror the data stored on the secondary to a tertiary system (the SnapMirror destination) at the remote data center.
• Qtree is the basic unit of SnapVault backup
• Data restored from the secondary qtrees can be put back into their associated qtrees
• Supports backup of non-qtree data and of an entire volume on the primary to a qtree on the secondary
• When you back up a source volume, the volume is backed up to a qtree on the secondary
• Qtrees in the source volume become directories in the destination qtree
• SnapVault cannot restore data back to a volume (a volume is restored as a qtree on the primary)
• The maximum number of secondary system qtrees per volume is 255
• The maximum total of Snapshot copies per destination volume is 251
• A separate SnapVault license for the primary and the secondary system is required
Restoration on Request
Users can perform a restore of their own data without the intervention of a system administrator. To do a restore, issue the snapvault restore command from the primary system whose qtree needs to be restored. After a successful restore of data, use the snapvault start -r command to restart the SnapVault relationship between the primary and secondary qtree.
Note: when you use the snapvault restore command to restore a primary qtree, SnapVault places a residual SnapVault Snapshot copy on the volume of the restored primary qtree. This Snapshot copy is not automatically deleted.
You cannot use the snapvault restore command to restore a single file. For single-file restores, you must use the ndmpcopy command.
The NearStore Personality
The NearStore personality allows FAS storage systems to be used as secondary systems.
• Converts the destination storage system to a NearStore system
• Increases the number of concurrent streams on the destination system when used for SnapMirror and SnapVault transfers
• Requires the nearstore_option license on the secondary and Data ONTAP 7.1 or later
• The license should not be installed on these systems if they are used to handle primary application workloads
• Supported only on the FAS3000 series
TCP Port Requirements
Port 10566 must be open in both directions for SnapVault backup and restore operations. If NDMP is used for control management, then port 10000 must be open on the primary and secondary.
Configuring SnapVault Primary and Secondary Systems
• On the primary
  ◦ Add the sv_ontap_pri license
  ◦ Enable the SnapVault service and configure SnapVault options
• On the secondary
  ◦ Add the sv_ontap_sec license
  ◦ Enable the SnapVault service and configure SnapVault options
  ◦ Initialize the first baseline transfer
• On the primary and secondary
  ◦ Schedule SnapVault snapshot creation
  ◦ Monitor transfer progress, status, and snapshots
Example:
Pri> license add sv_primary_license_code
Pri> options snapvault.enable on
Pri> options ndmpd.enable on
Pri> options snapvault.access host=secondary_hostname
Sec> license add sv_secondary_license_code
Sec> options snapvault.enable on
Sec> options ndmpd.enable on
Sec> options snapvault.access host=primary_hostname1,primary_hostname2
Sec> snapvault start -S pri_hostname:/vol/pri_vol/pri_qtree sec_hostname:/vol/sec_vol/sec_qtree
(sec_qtree must not exist on sec_vol)
Pri/Sec> snapvault status
Schedule SnapVault Snapshot Creation
Command syntax:
Pri/Sec> snapvault snap sched [-x] vol_name snapshot_name retention_count@day_of_the_week@hour(s)
• snapshot_name is the Snapshot copy basename. It must be identical on primary and secondary for a given scheduled data set.
• retention_count defines the number of SnapVault Snapshot copies you want to maintain for archiving.
• The "-x" parameter causes SnapVault to copy new or modified data from the primary qtree to its associated qtree on the secondary.
Snapshot schedule results:
A schedule of this kind keeps the two most recent weekly Snapshot copies, the six most recent nightly Snapshot copies, and the eight most recent hourly Snapshot copies, created at 8 a.m., noon, 4 p.m., and 8 p.m. every day. Whenever the Snapshot schedule creates a new Snapshot copy of a particular type, it deletes the oldest one and renames the existing ones. On the hour, for example, the system deletes hourly.7, renames hourly.0 to hourly.1, and so on.
Example:
Pri> snapvault snap sched pri_vol sv_hourly 11@mon-fri@7-18
Sec> snapvault snap sched -x sec_vol sv_hourly 11@mon-fri@7-18
Pri/Sec> snapvault status -q
Pri/Sec> snap list -q vol_name
SnapVault Commands
• Perform the initial baseline transfer from the primary qtree to the secondary qtree
  Sec> snapvault start [-k kbs] -S pri_hostname:/vol/pri_vol/pri_qtree sec_hostname:/vol/sec_vol/sec_qtree
• Resume the SnapVault relationship between the restored qtree and its backup qtree on the secondary
  Sec> snapvault start -r -S pri_hostname:/vol/pri_vol/pri_qtree sec_hostname:/vol/sec_vol/sec_qtree
• Remove a qtree on the secondary from the protection scheme and delete it
  Sec> snapvault stop sec_hostname:/vol/sec_vol/sec_qtree
• Force an incremental update of the snapshot specified on the primary and transfer it to the secondary
  Sec> snapvault update [options] -S pri_hostname:/vol/pri_vol/pri_qtree sec_hostname:/vol/sec_vol/sec_qtree
• Alter the characteristics of a SnapVault relationship, including the transfer speed, the number of retries, and the primary and secondary paths
  Sec> snapvault modify [-k kbs] sec_hostname:/vol/sec_vol/sec_qtree
• Display SnapVault status information on the primary or secondary
  Pri/Sec> snapvault status
• Halt a SnapVault transfer currently in progress; this operation aborts a transfer from the primary to the secondary
  Sec> snapvault abort sec_hostname:/vol/sec_vol/sec_qtree
• Manually create a snapshot on the primary or secondary
  Pri/Sec> snapvault snap create vol_name snapshot_name
• Unconfigure a snapshot schedule on the primary or secondary
  Pri/Sec> snapvault snap unsched -f vol_name snapshot_name
• On the primary, list all the known destinations for SnapVault primary qtrees
  Pri> snapvault destinations
• On the primary, release Snapshot copies that are no longer needed
  Pri> snapvault release /vol/pri_vol/pri_qtree sec_hostname:/vol/sec_vol/sec_qtree
• Restore a qtree from the secondary to the primary
  Pri> snapvault restore -s snap_name -S sec_hostname:/vol/sec_vol/sec_qtree pri_hostname:/vol/pri_vol/pri_qtree
Comparing SnapVault with SnapMirror
• VSM copies all snapshots from a read/write source to a read-only destination
• Qtree SnapMirror is to be used in an environment requiring immediate failover capability
• SnapVault is to be used with applications that can afford to lose some data and do not require immediate failover
• Qtree SnapMirror allows replication in both directions (source and destination can run on the same storage system)
• Qtree SnapMirror does not allow snapshot creation or deletion on the read-only destination
• SnapVault replicates in one direction (source and destination cannot run on the same storage system)
• SnapVault adds snapshot scheduling, retention, and expiration, providing versions (backups) on the secondary
• SnapMirror provides updates as often as once per minute
• SnapVault provides updates as often as once per hour
Throttle Network Usage of SnapMirror and SnapVault Transfers
• On a per-transfer basis:
  ◦ For SnapMirror, use the kbs option in the /etc/snapmirror.conf file on the secondary/destination
  ◦ For SnapVault, use the -k option
• For all transfers:
  ◦ Requires Data ONTAP 7.2 or later
  ◦ Enable system-wide throttling (default is off) on all systems: options replication.throttle.enable on
  ◦ Set the maximum bandwidth (default is unlimited) for all incoming transfers on the secondary: options replication.throttle.incoming.max_kbs
  ◦ Set the maximum bandwidth (default is unlimited) for all outgoing transfers on the primary: options replication.throttle.outgoing.max_kbs
Backup with Failover
In case of a disaster, when the primary becomes unavailable, you might want to convert the read-only qtree replicas to a writable file system to redirect CIFS and NFS client access to the secondary. SnapVault does not currently have the ability to create a writable destination on the secondary. You can use the SnapMirror/SnapVault bundle to convert the SnapVault destination qtree to a SnapMirror destination qtree, making it a typical SnapMirror destination qtree that can be quiesced and broken.
Requirements for the SnapVault/SnapMirror Bundle
• Data ONTAP 6.5 or later
• SnapVault primary license
  Note: if any changes made while in the broken state need to be copied back to the primary, you also need a SnapMirror license on the primary
• SnapVault/SnapMirror bundle license
  A SnapMirror license is required on the secondary to have access to the snapmirror convert command in priv set diag mode
Make a Secondary Qtree Writable
• Involve NetApp Support (when entering priv set diag mode)
• Convert the snapvaulted qtree to a snapmirrored qtree
• Quiesce the snapmirror qtree
• Break the mirror, making it writable
• Re-establish the SnapVault relationship
  ◦ Preserve the changes made on the secondary
  ◦ Or discard all changes made on the secondary
Example:
1. Convert the SnapVault qtree into a SnapMirror qtree:
Sec> snapmirror off
Sec> options snapvault.enable off
Sec> priv set diag
Sec*> snapmirror convert /vol/sec_vol/sec_qtree
2. Quiesce the destination qtree and break the relationship (makes the qtree writable):
Sec*> snapmirror on
Sec*> snapmirror quiesce /vol/sec_vol/sec_qtree
Sec*> snapmirror break /vol/sec_vol/sec_qtree
Re-establishing the SnapVault Relationship
There are two scenarios.
Scenario 1: Preserve all the changes made to the secondary during the DR period.
1. Resync the primary qtree
Pri> snapmirror resync /vol/pri_vol/pri_qtree
2. Quiesce the qtree
Pri> snapmirror quiesce /vol/pri_vol/pri_qtree
3. Break the mirror, making it writable
Pri> snapmirror break /vol/pri_vol/pri_qtree
4. Resync the secondary qtree
Sec> snapmirror resync /vol/sec_vol/sec_qtree
5. Turn SnapMirror and SnapVault off
Sec> snapmirror off
Sec> snapvault off
6. Convert the SnapMirror qtree to a SnapVault qtree
Sec> snapvault convert /vol/sec_vol/sec_qtree
7. Turn SnapMirror and SnapVault on
Sec> snapmirror on
Sec> snapvault on
Scenario 2: Discard all the changes made on the secondary during the DR period.
1. Resync the secondary qtree
Sec> snapmirror resync /vol/sec_vol/sec_qtree
2. Turn SnapMirror and SnapVault off
Sec> snapmirror off
Sec> snapvault off
3. Convert the SnapMirror qtree to a SnapVault qtree
Sec> snapvault convert /vol/sec_vol/sec_qtree
4. Turn SnapMirror and SnapVault on
Sec> snapmirror on
Sec> snapvault on

5. Best Practices and Troubleshooting
Optimize Mirror Performance and Recommendations
The following methodology helps in troubleshooting SnapMirror, SnapVault and OSSV (Open Systems SnapVault) performance issues.
Performance issues are mainly due to:
• An overloaded SnapMirror/SnapVault implementation
• Non-optimal space and data layout management
• High system resource utilization (CPU %util, disk I/O, CIFS/NFS connections/transmissions, etc.)
• Low network bandwidth
Symptoms are:
• Initialization or transfer updates lagging, the lag is above expectation, and the transfer duration does not meet the SLA
• The transfer duration meets the SLA, but the throughput is low
To investigate (see the sketch after this list):
  ◦ Check /etc/snapmirror.conf or snapvault snap sched, and define the expected lag (expected time between two scheduled updates)
  ◦ Then explore the snapmirror status -l or snapvault status -l output to get a view of the mirror implementation:
    ▪ How many systems are involved?
    ▪ How many mirror/backup services are active?
    ▪ Which systems are a source and a destination at the same time?
    ▪ How many relationships are set per source and destination system?
  ◦ Note the transfer lag and determine the date/time the last transfer succeeded
  ◦ Analyze the SnapMirror logs (/etc/log/snapmirror) and syslog messages (/etc/messages) to trace what happened before and after the last successful transfer completed: when was the request sent, started, and ended? Are there any errors?
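A minimal checking sequence along those lines (hypothetical relationship; rdfile is the standard ONTAP command for printing a file to the console) might be:
Dest> snapmirror status -l
Dest> snapvault status -l
Dest> rdfile /etc/log/snapmirror
Dest> rdfile /etc/messages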
6. NDMP Fundamentals
7. SnapLock

SAN Administration
NAS versus SAN
• NAS provides file-level access to data on a storage system. Access is via a network using ONTAP services such as CIFS and NFS.
• SAN provides block-level access to data on a storage system. SAN solutions can be a mixture of iSCSI or FCP protocols.
• SAN provides block access to LUNs (logical unit numbers), which are treated as local disks by both Windows and UNIX-based systems.
• Network access to LUNs is via SCSI over a Fibre Channel (FCP) network (referred to as a fabric) or SCSI over a TCP/IP (Ethernet) network.
• Network access to NAS storage is via an Ethernet network.
• The FCP and iSCSI protocols carry encapsulated SCSI commands as the data transport mechanism.
• When SAN and NAS storage are present on the same storage system, it is referred to as unified storage.
• Fabrics generally refer to FC connections through a switch.
Initiator/Target Relationship
The host (initiator) moves requests to the storage system (target):
1. An application sends a request to the file system
2. The file system issues I/O calls to the operating system
3. The operating system sends the I/O through its storage stack (SCSI driver) to issue the SCSI commands
4. The SCSI commands are encapsulated in FC frames or iSCSI IP packets
5. Once the request is received by the storage system target, Data ONTAP converts the requests from the initiator
6. Data ONTAP turns the SCSI commands into WAFL operations
7. WAFL sends the request to the RAID subsystem, where RAID manages data on the physical disks where the LUN is located
8. Once processed, request responses move back through the FC fabric or iSCSI network
How Initiators and Targets are Connected (FC SAN)
• Storage systems and hosts have HBAs (Host Bus Adapters) so they can be connected directly to each other or to FC switches
• Each FCP node is identified by a World Wide Node Name (WWNN) and a World Wide Port Name (WWPN)
• WWPNs are used to create igroups, which control host access to specific LUNs
How Initiators and Targets are Connected (IP SAN)
• Storage systems/controllers are connected to the network over standard Ethernet interfaces or through target HBAs
• Nodes are identified in IP SAN environments using a node name. There are two formats, iqn and eui:
  ◦ iqn.1998-02.com.netapp:sn.12345678
  ◦ eui.1234567812345678
• The host node names are used to create igroups, which control host access to specific LUNs
Fabric or Network Architectures
• NetApp supports all industry-accepted fabric and network architectures
• Types of architectures are:
  ◦ Single switch
  ◦ Cascade
  ◦ Mesh
  ◦ Core-Edge
  ◦ Director
• The maximum supported hop count for FC switches, which is the number of inter-switch links (ISLs) crossed between a host and the storage system, is limited to three (3)
• Multivendor ISLs are not supported
Zoning for an FC SAN
• Zones separate devices into separate subsets
• "Hard" zoning
  ◦ Restricts communication in a switched fabric
  ◦ Prevents zoning breaches caused by bypassing the name service
• "Soft" zoning
  ◦ Separates devices at the name service level but does not restrict communication between zones
  ◦ More flexible, less secure
• Similar to Ethernet VLANs
• Zones live on the switch
• An FC zone consists of a group of FC ports or nodes that can communicate with each other
• Two FC nodes can communicate with one another only when they are contained in the same zone
• The name service converts a name into a physical address on the network
FC SAN Topologies
• Direct Attached (Point-to-Point)
• Fibre Channel Arbitrated Loop (FC-AL)
  ◦ A private loop works with FC hubs. This loop can address 127 devices due to the limitation of 8-bit addresses.
  ◦ A public loop works in a fabric with switches.
  This loop can address 15 million addresses due to its 24-bit addressing scheme.
• Switched Fabric
• NetApp supports three basic FCP topologies between storage system targets and host initiators:
  ◦ Direct-Attached
  ◦ Single Fabric
  ◦ Dual Fabric
IP SAN Topologies
• NetApp differentiates between two basic topologies:
  ◦ Direct-Attached: the initiators (hosts) are directly attached to the target storage controller using a crossover cable
  ◦ Switched environment: the hosts are attached to storage controllers through Ethernet switches
Guidelines for Creating Volumes with LUNs
• Do not create any LUNs in the system's root volume.
• Ensure that no other files or directories exist in a volume that contains a LUN. Otherwise, use a separate qtree to contain the LUNs.
• If multiple hosts share the same volume, create a qtree on the volume to store all LUNs for the same host.
• Ensure that the volume option create_ucode is on (it is off by default).
• Use naming conventions that reflect the LUN's owner or the way that the LUN is used.
Create and Access a LUN
There are three steps required on the storage system and two additional steps performed on the FCP- or iSCSI-attached host.
• On the storage system:
  ◦ Create a LUN
  ◦ Create an igroup (FCP or iSCSI)
    ▪ Mapping a LUN to an igroup is often referred to as "LUN masking"
    ▪ igroups may be created prior to creating LUNs
    ▪ There is no requirement to populate the igroup with a WWPN (FCP) or node name (iSCSI) before mapping a LUN to an igroup
  ◦ Map the LUN to the igroup
• On the host:
  ◦ FCP: bind the HBA of the host to the storage system's WWPN (AIX and HP do not require persistent bindings)
  ◦ iSCSI: configure the iSCSI initiator to access the target
  ◦ Configure (i.e. format) the LUN for use on the host
Methods for LUN Creation
• lun create (storage system)
  ◦ Additional steps:
    ▪ igroup create (create an initiator group)
    ▪ lun map (map the LUN to an initiator group)
    ▪ add a portset (FCP) - a portset consists of a group of FCP target ports. You bind a portset to an igroup to make the LUN available only on a subset of FCP ports.
• lun setup (storage system)
• FilerView (host) - web-based application
• SnapDrive (host) - designed specifically for LUN management
Bind Host HBA to WWPN (FCP igroup)
Persistent binding permanently binds a particular target ID on the host to the storage system WWPN. On some systems you must create a persistent binding between the storage system (target) and the host (initiator) to guarantee that the storage system is always available at the correct SCSI target ID on the host.
Use the command "fcp show adapters" to display the WWPN for each HBA on the storage system.
On the Solaris host, use one of the following methods to specify the adapter for the storage system HBA:
• create_binding.pl
• /usr/sbin/lpfc/lputil
• /kernel/drv/lpfc.conf
• HBAnywhere (Emulex adapter)
• SANsurfer (QLogic adapter)
To determine the WWPN of the HBA installed on the AIX or HP-UX host:
• sanlun fcp show adapter -c
• The "-c" option will generate the complete command necessary for creating the igroup
• Use the WWPN when you create FCP-type initiator groups on the storage system
To find the WWPN of the HBA installed on the Linux host:
• modprobe driver_name - loads the driver
• The system creates a /proc/scsi/driver_name directory that contains a file for each QLogic HBA port. The WWPN is contained in the file for that port.
• Look in each /proc/scsi/driver_name/HBA_port_num file and get the WWPN. The filename is the HBA port number.
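Tying the storage-side steps together, a sketch with hypothetical names (a 10 GB Solaris LUN, an FCP igroup keyed to the host's WWPN, mapped at LUN ID 0):
lun create -s 10g -t solaris /vol/vol1/qt_host1/lun0
igroup create -f -t solaris host1_group 10:00:00:00:c9:2b:cc:39
lun map /vol/vol1/qt_host1/lun0 host1_group 0
lun show -m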
Storage System Commands for Initiators and Targets
• Host initiator HBAs
  ◦ fcp show initiators 0a
• Filer (storage) target HBAs
  ◦ fcp show targets 0a
Access LUNs on Solaris (FCP igroup)
LUNs created on the storage system that will be accessed via FCP must be configured on the Sun Solaris host.
• Edit the /kernel/drv/sd.conf file with the appropriate target and LUN IDs. The /kernel/drv/lpfc.conf file will help determine what should be in the sd.conf file.
• Run devfsadm on the host to allow discovery of the new LUNs, or use the reboot command: reboot -- -r
• Use the sanlun command to verify that the new LUNs are now visible
• Use the format command to label the new LUNs as Solaris disks
• Create a UNIX file system on the disk, or use it as a raw device
Access LUNs on AIX (FCP igroup)
Configure with the native AIX LVM (Logical Volume Manager):
• Run the cfgmgr command to discover the new LUNs. This allows the host to log into the fabric, check for new devices, and create new device entries.
• Run the sanlun lun show command to verify that the host has discovered the new LUNs
• Run the smit vg command to create a volume group
• Run smit to access storage on a volume group
• Run smit fs to create a file system
• Run the lsvg newvg command to verify the information on the new volume group
Access LUNs on HP-UX (FCP igroup)
Discover the new LUNs on HP-UX:
• Run ioscan to discover the LUNs
• Run the ioinit -i or insf -e command to create device entries on the host
• Check to see which disk devices map to which HBA devices (tdlist or fcdlist)
• Run the sanlun lun show -p all command to display information about device nodes
• Use HP-UX LVM or VERITAS Volume Manager to manage the LUNs
Access LUNs on Linux (FCP igroup)
To configure the LUNs on Linux:
• Configure the host to find the LUNs (reboot or modprobe)
• Verify that the new LUNs are visible (sanlun lun show filer_name:path_name)
• Enable the host to discover new LUNs (modprobe)
• Label the new LUNs as Linux disks:
  ◦ File system: fdisk /dev/sd[char]
  ◦ Raw access: use the raw command to bind the raw device to the block device
Access LUNs on Solaris (iSCSI igroup)
To configure the iSCSI LUNs on Solaris:
• Configure an iSCSI target for static or dynamic discovery
  ◦ SendTargets (dynamic): iscsiadm add discovery-address IPaddress:port
  ◦ iSNS (dynamic): iscsiadm add isns-server IPaddress:port
  ◦ Static: iscsiadm add static-config eui_number,IPaddress
• Enable an iSCSI target discovery method
  ◦ SendTargets: iscsiadm modify discovery --sendtargets enable
  ◦ iSNS: iscsiadm modify discovery --isns enable
  ◦ Static: iscsiadm modify discovery --static enable
• Discover the LUNs with devfsadm -i iscsi
• View LUNs with /opt/NTAP/SANToolkit/bin/sanlun lun show all
• Create file systems with the format command
• Make iSCSI devices available on reboot: add an entry to the /etc/vfstab file
Administer and Manage LUNs
The following commands are used to manage LUNs.
• Take LUNs offline and online
  ◦ lun online lun_path [lun_path ...]
  ◦ lun offline lun_path [lun_path ...]
• Unmap a LUN from an igroup
  ◦ Take the LUN offline using the lun offline command
  ◦ lun unmap lun_path igroup LUN_ID
• Rename a LUN
  ◦ lun move lun_path new_lun_path
• Resize a LUN
  ◦ Take the LUN offline using the lun offline command
  ◦ lun resize [-f] lun_path new_size
• Modify the LUN description
  ◦ lun comment lun_path [comment]
• Enable or disable space reservation
  ◦ lun set reservation lun_path [enable | disable]
• Remove a LUN
  ◦ Take the LUN offline using the lun offline command, or use the "-f" option with the lun destroy command
  ◦ lun destroy [-f] lun_path
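For example (hypothetical path and size), growing a LUN in place follows the offline/resize/online pattern above:
lun offline /vol/vol1/qt_host1/lun0
lun resize /vol/vol1/qt_host1/lun0 20g
lun online /vol/vol1/qt_host1/lun0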
LUN Migration and Mapping
• LUNs can be migrated to another path (lun move lun_path new_lun_path) in the same qtree or volume.
• Separate LUN maps are maintained for each initiator group
  ◦ Two LUNs mapped to the same igroup must have unique LUN IDs
  ◦ You can map a LUN only once to an igroup
  ◦ You can add a single initiator to multiple igroups
• To migrate a LUN from one igroup to another, use the commands:
  ◦ lun unmap /vol/vol1/lun1 igroup1 3
  ◦ lun map /vol/vol1/lun1 igroup2 3
New and Changed SAN-related Commands for Data ONTAP 7.0
• cf takeover -n enables a clustered giveback operation when different versions of Data ONTAP are used
• fcadmin configures the FAS6000 FC cards to operate in SAN target mode or initiator mode
• lun clone create creates a LUN clone
• lun clone split (start, status, stop) splits a clone, displays the status of clone splitting, or stops the clone splitting process
New and Changed SAN-related Commands for Data ONTAP 7.1
• iscsi tpgroup manages the assignment of storage system network interfaces to target portal groups
• portset (help, add, create, destroy, remove, show) lists portsets, adds ports to portsets, creates new portsets, destroys portsets, removes ports from a portset, shows ports in a portset
• fcp config now includes the speed option, allowing you to change the speed setting for an adapter (4, 2, 1, or auto, which is the default)
New and Changed SAN-related Commands for Data ONTAP 7.2
• igroup rename allows you to rename an igroup
LUN Cloning
• A LUN clone is a point-in-time, writable copy of a LUN in a Snapshot copy.
• The LUN clone shares space with the LUN in the backing Snapshot copy:
  ◦ unchanged data stays in the original Snapshot copy
  ◦ changed data is written to the active file system
• Sample usage: testing
  ◦ Use LUN cloning for long-term use of a writable copy of a LUN in a Snapshot copy
  ◦ After the LUN clone operation is complete, split the LUN clone from the backing Snapshot copy and delete the Snapshot copy.
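As a sketch of that clone-then-split workflow (hypothetical paths and snapshot name):
lun clone create /vol/vol1/lun1_clone -b /vol/vol1/lun1 mysnap
lun clone split start /vol/vol1/lun1_clone
lun clone split status /vol/vol1/lun1_clone
snap delete vol1 mysnap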