Eric Fleischman's WebLog

Getting a log from ADAMSync


Over the course of the next few posts we’re going to start modifying all sorts of things in the configuration. Depending upon the particulars of your environment this might or might not pan out. :) As such, we should probably take a quick look at the logging available before we break anything too badly.

When you run ADAMSync there’s a switch to give you enhanced logging:
  /log [log file]                       -- Log messages, use "-" option to log to screen

I'm typically a fan of using a filename rather than "-", as the logs tend to get quite large.
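
For example, a run against the instance and config we've been using would look something like this (the log file name is arbitrary; pick whatever you like):

C:\WINDOWS\ADAM>adamsync /sync localhost:50000 dc=SyncTargetDC /log ADAMSyncLog.txt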

From our run yesterday, here’s some of the output of the log (snipped for brevity):

Adamsync.exe v1.0 (5.2.3790.2021)
Establishing connection to target server localhost:50000.
Saving Configuration File on DC=SyncTargetDC
Saved configuration file.
ADAMSync is querying for a writeable replica of erictest.local.
Establishing connection to source server efleis-df2.erictest.local:389.
Using file .\damF.tmp as a store for deferred dn-references.
Populating the schema cache
Populating the well known objects cache
Starting synchronization run from dc=erictest,dc=local.
Starting DirSync Search with object mode security.

<snip>

Processing Entry: Page 1, Frame 1, Entry 4, Count 1, USN 0
Processing source entry <guid=51b2fc571aa27c4e9a488e7b79d1d5e1>
Processing in-scope entry 51b2fc571aa27c4e9a488e7b79d1d5e1.
Adding target object CN=Users,dc=SyncTargetDC.
Adding attributes: sourceobjectguid, objectClass, description, instanceType, showInAdvancedViewOnly, lastagedchange,
Previous entry took 0 seconds (31, 31) to process

<snip>

Beginning processing of deferred dn references.
Processing deferred modifications for 53d9609b8ee6014c947f57d3fc850aab:ipsecISAKMPReference.
+ Synchronizing dn-ref to 39b749b601b1f547a2a76a97e5beb0f2.

<snip>

Finished processing of deferred dn references.
Finished (successful) synchronization run.
Number of entries processed via dirSync: 169
Number of entries processed via ldap: 3
Processing took 7 seconds (0, 1082877952).
Number of object additions: 168
Number of object modifications: 4
Number of object deletions: 0
Number of object renames: 3
Number of references processed / dropped: 58, 7
Maximum number of attributes seen on a single object: 18
Maximum number of values retrieved via range syntax: 0
Beginning aging run.
Aging requested every 0 runs. We last aged 1 runs ago.
Saving Configuration File on DC=SyncTargetDC
Saved configuration file.

Alright, so time to slice it up some. This post would go on forever if I sliced it up too much, so I’ll point out some of the highlights.

Of course, we start off with just a bit of overview of what we’re about to do. Version of the tool, host we’re talking to, etc. Nothing out of the ordinary. :)

Next up, object sync. There will be an entry like this for each object synchronized. Note the GUID that is listed. While this might not look like the sort of GUID you're used to, it is actually the GUID of the object in the source NC. There are two potentially confusing things about what I just said, so I'd like to call them both out so we're all on the same page, else things will only get worse from here :):
1) It is the GUID despite looking like it is wrong. Note the form:
51b2fc571aa27c4e9a488e7b79d1d5e1
The GUID of that object when all prettied up is:
57fcb251-a21a-4e7c-9a48-8e7b79d1d5e1
Here's the magic to the conversion. Let's look at just the first section:
57fcb251 vs. 51b2fc57
See the pattern yet? The bytes are simply reversed: 57fcb251 stored as bytes is 51 b2 fc 57, which is exactly what the log prints. The same byte swap happens to the next two groups (a21a becomes 1aa2, 4e7c becomes 7c4e), and the final portion is left alone. I won't get in to why here, but that's how we pretty up GUIDs before we display them. With that example I'm sure you can walk through the rest of it and figure out where it comes from. (There's a short C# snippet just below that makes the ordering concrete.)
2) It is the GUID of the object in the source NC. One of the confusing things about synchronization is that we're doing a logical recreation of some data state in the target environment based upon what we see in the source environment. As a result, some properties aren't the same. One such property is the GUID. The "copy" of the object in the target will have a different GUID, because all we're really doing in ADAMSync is telling ADAM to create an object with the following logical properties (name, description, etc.)…namely those properties we care about (you get to pick the list)...and letting the stuff going on at the directory layer do its thing. So much like we didn't tell AD the GUID of the object in the source, we don't tell ADAM what the GUID should be in the target.
The result of this subtle yet important distinction is that when thinking about tasks ADAMSync is doing it is exceptionally important to consider if the task is relative to the source or the target. Things get awfully confusing awfully fast if you don’t.
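
If you want to see the byte ordering for yourself, here's a tiny C# snippet. Nothing ADAM-specific about it, it just uses the .NET Guid type to reproduce the raw form shown in the log:

    using System;

    class GuidByteOrderDemo
    {
        static void Main()
        {
            // The "pretty" form of the GUID from the log above.
            Guid g = new Guid("57fcb251-a21a-4e7c-9a48-8e7b79d1d5e1");

            // ToByteArray() returns the bytes in stored order (the first three fields are
            // little-endian), which is exactly what the ADAMSync log prints.
            string raw = BitConverter.ToString(g.ToByteArray()).Replace("-", "").ToLower();
            Console.WriteLine(raw);   // prints 51b2fc571aa27c4e9a488e7b79d1d5e1
        }
    }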

We then get in to deferred dn references. ADAMSync processes linked-value attributes (such as group membership) later, so that all objects already exist by the time it is time to create the links. We can revisit this later in more detail if people are interested in the subtleties of what and how and why.

And finally, closing statistics with a little note that we succeeded.
Errors will be painfully obvious. We'll probably start seeing some as we start modifying our config file in some crazy ways over the next few posts. I'll try to include the common errors so you get a sense of what you're likely to hit, as well as my general methodology for approaching these sorts of errors.

Update: Fixed some formatting issues I didn't notice before.

Syncing to our OU=SyncTargetOU NC instead


Earlier in this series of posts I changed our sync target from the "OU=" form to the "DC=" form. This was done to carefully skirt around a small issue. Now, with our newfound knowledge of logging in ADAMSync, let's give it another try.

So let’s go ahead in to our previous config file and change this line:
     <target-dn>dc=SyncTargetDC</target-dn> 

To read:
     <target-dn>ou=SyncTargetOU</target-dn> 

Of course, this assumes you have already created such an NC in your ADAM environment.
From there, I went ahead and installed and ran my config:

C:\WINDOWS\ADAM>adamsync /install localhost:50000 ADAMSyncDemo.XML
Done.

C:\WINDOWS\ADAM>adamsync /sync localhost:50000 ou=synctargetou /log OULog1.txt
And when I cracked open the log, there were all sorts of errors.

Processing Entry: Page 2, Frame 1, Entry 22, Count 1, USN 0
Processing source entry <guid=ba50fb2e1bdd53468913b5d023460185>
Processing in-scope entry ba50fb2e1bdd53468913b5d023460185.
Adding target object CN=Builtin,ou=SyncTargetOU.
Adding attributes: sourceobjectguid, objectClass, instanceType, showInAdvancedViewOnly, creationTime, forceLogoff, lockoutDuration, lockOutObservationWindow, lockoutThreshold, maxPwdAge, minPwdAge, minPwdLength, modifiedCountAtLastProm, nextRid, pwdProperties, pwdHistoryLength, uASCompat, lastagedchange,
Ldap error occured. ldap_add_sW: Naming Violation.
Extended Info: 00002099: NameErr: DSID-03050F78, problem 2005 (NAMING_VIOLATION), data 0, best match of:
 'ou=SyncTargetOU'
.
Ldap error occured. ldap_add_sW: Naming Violation.
Extended Info: 00002099: NameErr: DSID-03050F78, problem 2005 (NAMING_VIOLATION), data 0, best match of:
 'ou=SyncTargetOU'
.

First, notice that ADAMSync gives you all of the error text that ADAM returned to it. This is critical data.
So the question is, why did we fail?
Looking at the error in more detail:
Extended Info: 00002099: NameErr: DSID-03050F78, problem 2005 (NAMING_VIOLATION), data 0, best match of:

The most interesting piece of data here is the 2099, which maps to:

C:\>err 2099
# for hex 0x2099 / decimal 8345 :
  ERROR_DS_ILLEGAL_SUPERIOR                                 winerror.h
# The object cannot be added because the parent is not on the
# list of possible superiors.
And that’s the problem.
This entry was an attempt to create the object CN=Builtin under the parent object OU=SyncTargetOU. That makes sense; we asked it to go into OU=SyncTargetOU.
CN=Builtin is an object with an objectClass of builtinDomain:
      >> Dn: CN=Builtin,DC=erictest,DC=local
       2> objectClass: top; builtinDomain;
OU=SyncTargetOU is an organizationalUnit (we specified that when we created the NC in dsmgmt). We need to make organizationalUnit a possSuperior of builtinDomain. More info on possSuperiors can be found in the schema documentation on MSDN.
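
For the curious, here is roughly what that change can look like as an LDF file, following the same ldifde style used elsewhere in these posts. The file name is mine, and as always, think twice before modifying schema in a real environment:

dn: CN=Builtin-Domain,CN=Schema,CN=Configuration,DC=X
changetype: modify
add: possSuperiors
possSuperiors: organizationalUnit
-

C:\WINDOWS\ADAM>ldifde -i -f AddPossSuperior.ldf -s localhost -t 50000 -c "cn=configuration,dc=x" #configurationNamingContext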

So anyway, after making that change, I ran sync again…..
Finished (successful) synchronization run.
Number of entries processed via dirSync: 168
Number of entries processed via ldap: 2
Processing took 13 seconds (0, 1085446656).
Number of object additions: 52
Number of object modifications: 118
Number of object deletions: 0
Number of object renames: 112
Number of references processed / dropped: 58, 7
Maximum number of attributes seen on a single object: 18
Maximum number of values retrieved via range syntax: 0
As we said earlier, most ADAMSync failures are schema problems. :)

Update: Corrected a small typo in my before target DN. Thanks!

Synchronizing only the attributes you really want


In our previous ADAMSync runs we synchronized all attributes except those in the <exclude> tags. This is probably ok for our tinkering, but in a real scenario you might want to consider explicitly picking the attributes you want instead of taking everything except those you tell it to skip.

Why? Well, consider the costs. If you synchronize everything, you’re paying the costs for all of those attributes (cost for lookup in AD, shipping them over the wire, writing them in to ADAM, storage in ADAM, etc.). If you only synchronize what you need you save on those costs while still servicing what you need in your application. And of course, you can always change your mind later. :)

The one tricky thing about this operation is picking the attributes you need. Consider that for some set of classes you’re creating, there is a minimum set of attributes that each class will require in order to be created properly. Should you miss some of them, you will get errors such as this one:

Processing Entry: Page 2, Frame 1, Entry 65, Count 1, USN 0
Processing source entry <guid=09e91eb3653f004fb8f8350d6ef2d577>
Processing in-scope entry 09e91eb3653f004fb8f8350d6ef2d577.
Adding target object CN=Domain System Volume (SYSVOL share),CN=NTFRS Subscriptions,CN=EFLEIS-DF2,OU=Domain Controllers,ou=SyncTargetOU.
Adding attributes: sourceobjectguid, objectClass, instanceType, lastagedchange,
Ldap error occured. ldap_add_sW: Object Class Violation.
Extended Info: 0000207C: UpdErr: DSID-0315116B, problem 6002 (OBJ_CLASS_VIOLATION), data 0
And 207C maps to:

C:\>err 207C
# for hex 0x207c / decimal 8316 :
  ERROR_DS_MISSING_REQUIRED_ATT                             winerror.h
# A required attribute is missing.
So this can be tougher than it first appears. For my test environment (as mentioned previously, a fresh win2k3 forest), the following set of attributes was enough. But perhaps you will need more. If so, note the object that failed, and check out the attributes required on that object. Make sure you include all of them.

With that having been said, let’s go ahead and trim our attribute set down a bit. I’ll go ahead and only retain a subset of the attributes.
I’ll change this section:
<attributes>   
    <include></include>   
    <exclude>extensionName</exclude>
    <exclude>displayNamePrintable</exclude>   
    <exclude>flags</exclude>   
    <exclude>isPrivelegeHolder</exclude>   
    <exclude>msCom-UserLink</exclude>   
    <exclude>msCom-PartitionSetLink</exclude>   
    <exclude>reports</exclude>  
    <exclude>serviceprincipalname</exclude>
    <exclude>accountExpires</exclude>
    <exclude>adminCount</exclude>
    <exclude>primarygroupid</exclude>
    <exclude>userAccountControl</exclude>
    <exclude>codePage</exclude>
    <exclude>countryCode</exclude>
    <exclude>logonhours</exclude>
    <exclude>lockoutTime</exclude>
   </attributes> 
To be:
   <attributes>    
    <include>description</include>    
    <include>frsstagingpath</include>
    <include>fRSRootPath</include>
    <include>sourceObjectGuid</include>
    <include>lastAgedChange</include>
    <exclude></exclude>
   </attributes>
Here’s where this list came from….
I first just decided I wanted object descriptions.
Then I gave it a run. It complained with the error previously discussed. So I went to the class definition for the failing object and included its list of mustContain attributes.
The last two attributes (sourceobjectguid and lastagedchange) are ADAMSync attributes themselves. These are used for internal tracking. So I went ahead and included them.
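
If you'd rather not hunt through the schema by hand, a quick query for the class definition does the trick. Here's a rough S.DS.P sketch; the server, schema NC and class name are placeholders for my test forest, and remember that required attributes can also come from parent and auxiliary classes, so check those too:

    using System;
    using System.DirectoryServices.Protocols;

    class MustContainLookup
    {
        static void Main()
        {
            // Point this at the schema NC of the source forest; these values are examples.
            string schemaNC = "CN=Schema,CN=Configuration,DC=erictest,DC=local";
            string className = "user";   // substitute the class of the object that failed

            using (LdapConnection conn = new LdapConnection("efleis-df2.erictest.local:389"))
            {
                string filter = String.Format("(&(objectClass=classSchema)(lDAPDisplayName={0}))", className);
                string[] attrs = { "mustContain", "systemMustContain" };

                SearchRequest req = new SearchRequest(schemaNC, filter, SearchScope.OneLevel, attrs);
                SearchResponse resp = (SearchResponse)conn.SendRequest(req);

                // Print every required attribute the class declares directly.
                foreach (SearchResultEntry entry in resp.Entries)
                    foreach (string attrName in attrs)
                        if (entry.Attributes.Contains(attrName))
                            foreach (string val in entry.Attributes[attrName].GetValues(typeof(string)))
                                Console.WriteLine("{0}: {1}", attrName, val);
            }
        }
    }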

And with a little luck, it’ll work out just as well for you as it did for me.

Finished (successful) synchronization run.
Number of entries processed via dirSync: 169
Number of entries processed via ldap: 3
Processing took 10 seconds (0, 1085404416).
Number of object additions: 168
Number of object modifications: 4
Number of object deletions: 0
Number of object renames: 3
Number of references processed / dropped: 0, 0
Maximum number of attributes seen on a single object: 6
Maximum number of values retrieved via range syntax: 0

 

ADAMSync can also transform users into proxy users


Now that we have ADAMSync synchronizing our data over, we should probably investigate the most commonly asked for transformation: proxy user transformation.

When we introduced proxy bind in ADAM RTM, customers seemed to really connect with the semantic. If anything, I’d argue we have customers overusing proxy bind! But that’s a conversation for another day.

However, the introduction of proxy bind opened up a management scenario that had not been seen before. When you use proxy bind, you must first have created the objects to which you will proxy bind. That means one needs to create and maintain these new objects in the ADAM environment so they correspond with the AD users. This seems like a natural scenario for ADAMSync.

In RC0 we enabled what we typically call “user to userProxy transformation.” This transformation is simple….one can take users being synchronized and create proxy users out of them. These proxy users may be of the Microsoft defined proxy user class (called userProxy) which has shipped with ADAM since RTM, they could be of the new class we added to R2 to help with this (userProxyFull), or they could be some custom class you have implemented (any class defined with an aux class of msds-proxybind will have the proxy behavior). We allow you to tweak this in your configuration file.

So let’s give it a try.
Before doing anything, we need to get a class defined that leverages the proxy bind functionality. For the sake of simplicity I’ll use the one that ships with ADAM:

C:\WINDOWS\ADAM>ldifde -i -f MS-UserProxy.LDF -s localhost -t 50000 -c "cn=configuration,dc=x" #configurationNamingContext
Connecting to "localhost"
Logging in as current user using SSPI
Importing directory from file "MS-UserProxy.LDF"
Loading entries....
3 entries modified successfully.

The command has completed successfully
Time for ADAMSync itself….
I'm going to go ahead and change the heart of the sync file as follows (the modified pieces are the trimmed attribute list, the objectCategory=person filter, and the new user-proxy section):
<?xml version="1.0"?>
<doc> 
 <configuration>  
  <description>sample Adamsync configuration file</description>  
  <security-mode>object</security-mode>        
  <source-ad-name>erictest.local</source-ad-name>  
  <source-ad-partition>dc=erictest,dc=local</source-ad-partition>
  <source-ad-account></source-ad-account>               
  <account-domain></account-domain>
  <target-dn>ou=SyncTargetOU</target-dn>  
  <query>   
   <base-dn>dc=erictest,dc=local</base-dn>
   <object-filter>(objectCategory=person)</object-filter>   
   <attributes>    
    <include>objectSID</include>    
    <include>sourceObjectGuid</include>
    <include>lastAgedChange</include>
    <exclude></exclude>
   </attributes>  
  </query>
  <user-proxy>
    <source-object-class>user</source-object-class>
    <target-object-class>userProxy</target-object-class>
  </user-proxy>
 
  <schedule>   
   <aging>    
    <frequency>0</frequency>    
    <num-objects>0</num-objects>   
   </aging>   
   <schtasks-cmd></schtasks-cmd>  
  </schedule> 
 </configuration> 
 <synchronizer-state>  
  <dirsync-cookie></dirsync-cookie>  
  <status></status>  
  <authoritative-adam-instance></authoritative-adam-instance>  
  <configuration-file-guid></configuration-file-guid>  
  <last-sync-attempt-time></last-sync-attempt-time>  
  <last-sync-success-time></last-sync-success-time>  
  <last-sync-error-time></last-sync-error-time>  
  <last-sync-error-string></last-sync-error-string>  
  <consecutive-sync-failures></consecutive-sync-failures>  
  <user-credentials></user-credentials>  
  <runs-since-last-object-update></runs-since-last-object-update>  
  <runs-since-last-full-sync></runs-since-last-full-sync> 
 </synchronizer-state>
</doc>
The new user-proxy section is what defines the transformation.
One can transform….well, anything to anything! :) So long as you are going from some sort of security principal in the source to a proxy user in the target, it'll fly right along. I'm using userProxy just to keep it simple. Note that I also included objectSid, as proxy users require the SID to be specified. Finally, I changed my search filter to look for objects with objectCategory=person just to isolate exactly what I wish to import.

C:\WINDOWS\ADAM>adamsync /install localhost:50000 ADAMSyncDemo.XML
Done.

C:\WINDOWS\ADAM>adamsync /sync localhost:50000 "ou=synctargetou" /log -

<chopped for brevity>

Finished (successful) synchronization run.
Number of entries processed via dirSync: 6
Number of entries processed via ldap: 1
Processing took 0 seconds (0, 1080131584).
Number of object additions: 7
Number of object modifications: 0
Number of object deletions: 0
Number of object renames: 0
Number of references processed / dropped: 0, 0
Maximum number of attributes seen on a single object: 5
Maximum number of values retrieved via range syntax: 0

And sure enough, when I go to look for some of the users I know should be there…..

>> Dn: CN=Administrator,CN=Users,OU=SyncTargetOU
 3> objectClass: top; syncEngineAuxObject; userProxy;
 1> cn: Administrator;
 1> distinguishedName: CN=Administrator,CN=Users,OU=SyncTargetOU;
 1> instanceType: 0x4 = ( IT_WRITE );
 1> whenCreated: 09/23/2005 10:59:13 Pacific Standard Time Pacific Daylight Time;
 1> whenChanged: 09/23/2005 10:59:13 Pacific Standard Time Pacific Daylight Time;
 1> uSNCreated: 23644;
 1> uSNChanged: 23644;
 1> showInAdvancedViewOnly: TRUE;
 1> name: Administrator;
 1> objectGUID: 613813f4-f8cf-44ba-887b-aae4cb128580;
 1> objectSid: S-1-5-21-980059532-776183279-2334900600-500;
 1> objectCategory: CN=User-Proxy,CN=Schema,CN=Configuration,CN={B57A6E49-957D-434C-8584-9AA3D3946EF0};
 1> sourceObjectGuid: P upy?B$6 ;
 1> lastAgedChange: 20050923175913.0Z;
One of the most commonly made mistakes is forgetting to include objectSID so please do include it! If you don't, you'll get the missing attribute error we have seen before.
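
If you want to convince yourself the proxy objects actually work, try a simple bind as one of them; a simple bind is what triggers the proxy-bind path, with the password verified against AD. Here's a rough S.DS.P sketch. The port and DN are from my environment, and keep in mind that out of the box ADAM wants the connection secured before it will accept a simple bind with a password, so you may need SSL or to have relaxed that requirement on your instance:

    using System;
    using System.Net;
    using System.DirectoryServices.Protocols;

    class ProxyBindCheck
    {
        static void Main()
        {
            LdapConnection conn = new LdapConnection("localhost:50000");
            conn.AuthType = AuthType.Basic;   // simple bind, which is what invokes proxy bind
            conn.Credential = new NetworkCredential(
                "CN=Administrator,CN=Users,OU=SyncTargetOU",   // the userProxy object's DN
                "password-of-the-AD-account");                 // verified against AD, not ADAM

            try
            {
                conn.Bind();   // throws on failure
                Console.WriteLine("Proxy bind succeeded.");
            }
            catch (LdapException ex)
            {
                Console.WriteLine("Proxy bind failed: " + ex.Message);
            }
        }
    }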

ADAMSync on pre-R2 systems


From the inbox.....

<quote>
You say the requirements are Win2003R2 with the ADAM installed from the R2 CD.
Will your guides work for us aswell? Running Win2003 SP1 in AD-environment and our developers are running standard Windows XP with the ADAM downloaded from MS download site?
</quote>
Good question! From a technical perspective, pointing ADAMSync at a pre-R2 ADAM is totally ok. And of course, syncing from an AD that has no DCs running R2 is a non-issue. We didn't take any dependency on anything in R2. That said, the following should be noted:
1) To the best of my knowledge, pointing at a pre-R2 ADAM is not an explicitly tested scenario. So perhaps there is some issue lurking here we're not aware of. I would categorize this as "exceptionally unlikely", but I guess you never know.
2) There might be licensing ramifications to using ADAMSync if you don't use it on an R2 box. I'm not a licensing guy, nor do I play one on TV. I would recommend asking a licensing person this question.

 

"VGA"-like drivers for networking


One of the things that has always impressed me about keyboard, mouse and monitor support is that it just works. That is, you can plug in almost any keyboard, mouse and monitor, on basically any video card, and there is some level of support provided by your OS/BIOS/etc. Independent of the OS. Independent of the generation of hardware.

This makes perfect sense. Without at least two of these three (some might argue that the mouse is optional, but I would disagree when you consider non-advanced users), you can’t bootstrap the system. You need to see something in order to load a better video driver. This was a feature born out of necessity.

 

Times have changed. User expectations around their PC have changed. Networking is mainstream now, not just for corporations. The # of high speed internet connections in the home is growing. Many, many people have and use NICs on a daily basis.

 

My wish

I think we should have a similar, fallback mechanism to bootstrap NICs to get users online. In the absence of a driver, I should not be required to find a floppy disk (if my PC even has a floppy drive….my new one at home does not to my knowledge) or burn a CD on another machine to get a good driver. I should be able to get online with basic functionality on a basic connection. I further expect my cable modem / dsl / etc. to work in this configuration. I expect this much to work. Then we can let something like WU or downloads from the manufacturer's website give us the better driver.

 

I don’t know anything about writing drivers, so I ask you, the universe. Why does this not exist?

Large AD database? Probably not this large...


Over the last few months there have been a series of threads in regard to max <fill in the item here...there have been many> in a database. These items have ranged from database size to # of objects and other such things. I figured, after the latest thread over on activedir.org, I'd do a little testing and put some numbers behind it so we could say "we have done this" and not "the system should do this."

 

What should this testing accomplish?

 

First, raw DB size. Gotta create a big DB or it probably doesn't matter.

 

Next, # of objects. For my testing, this was the real metric I was interested in. As mentioned over on ActiveDir (I would provide a link to the thread but I can’t seem to get the mail archives to work right now…I’ll try and provide one later), there is a theoretical max # of objects in the lifetime of a database which is, all said and done, 2^31 objects. I wanted to shoot for this. After all, Dean asked what error you would get, and I didn’t know. :)

 

I wrote a tool which started banging against an ADAM SP1 x64 instance. It was creating pretty small objects, as I wanted to reduce the amount of time this test took. My objects looked like this:

                    dn: cn=leafcontX,cn=parentcontY,cn=objectsZ,ou=objdata
                    changetype: add
                    objectclass: container

(Of course, sub in values for X, Y and Z as appropriate)

I had it use anywhere from 16 to 40 threads for this work depending upon the phase of import, and I simply wrapped around ldifde for it….I figured, there is a well tested tool for this, why not let it do most of the hard work?
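
For the curious, the "tool" really was nothing fancy. A rough sketch of the generation side might look like this; the counts and file name are made up for illustration, and in reality the parent containers (cn=objectsZ and cn=parentcontY) need their own add records before the leaves reference them:

    using System.IO;

    class LdifGenerator
    {
        static void Main()
        {
            // Write one LDF file full of container adds, then feed it to ldifde.
            using (StreamWriter w = new StreamWriter("containers-0.ldf"))
            {
                for (int z = 0; z < 4; z++)
                    for (int y = 0; y < 100; y++)
                        for (int x = 0; x < 1000; x++)
                        {
                            w.WriteLine("dn: cn=leafcont{0},cn=parentcont{1},cn=objects{2},ou=objdata", x, y, z);
                            w.WriteLine("changetype: add");
                            w.WriteLine("objectclass: container");
                            w.WriteLine();   // blank line separates LDIF records
                        }
            }
        }
    }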

 

Next, I got my hands on a test box (thx EEC!), put it on a SAN, installed ADAM, and away I went.

 

Along the way, we did a few other perf tests (looking at increased checkpoint depths and the like) so it added a bit of time to the import. However, after about a month, I had nearly filled my 2TB partition:

06/08/2006  10:41 AM 2,196,927,299,584 adamntds.dit

 

I created just shy of 2^31 objects. When I went to create that next object (done here by hand in LDP to illustrate the error)…

***Calling Add...

ldap_add_s(ld, "cn=sample1,OU=ObjData", [1] attrs)

Error: Add: Operations Error. <1>

Server error: 000020EF: SvcErr: DSID-0208044C, problem 5012 (DIR_ERROR), data -1076

 

If you look up -1076, you’ll find it is JET_errOutOfAutoincrementValues (from esent98.h). Woo hoo! I ran out of DNTs.

 

With this DB in hand, it was time to find out what else works and what else does not…

- Promotion of a replica fails. This makes perfect sense….it tries to create a couple of objects in the config NC, and that fails.
- Create of an NC fails. Again, to be expected, this task consumes DNTs.
- I ran esentutl /ms. It chugged for nearly 30 seconds, but worked perfectly.
- I also ran esentutl /k to make sure the DB did not have any physical corruption, but also to just see how long that took. :)
- Other standard tasks (kicking off garbage collection, online defrag, restarting the service, etc.) all worked perfectly.
- Search works like a champ. Sure it takes a good bit of I/O for most interesting searches, but that’s to be expected, of course.

 

It is worth noting that anything which failed did so gracefully. There were no nastygrams in my event logs either.

 

So for those of you who are worrying….you can sleep well at night now. We have tried rolling over DNT, and it works just fine.

 

A fun stat…..from the esentutl /ms output:

Name                   Type   ObjidFDP    PgnoFDP  PriExt      Owned  Available

==============================================================

<EFleis – snip to save some space>

  nc_guid_Index         Idx         25         43     1-m   10870892          5

 

That owned number is in pages. That’s right, my NC_GUID index is 82.9GB…bigger than most databases. :)

 

While there were no major issues, we (Brett was looking at this too) did hit a few bumps along the way, and Brett was kind enough to write a few ESE tools for me to help monitor how we were doing. I'll outline all of these things over the next few days as I have time to write them up. I'll also provide more clarity around the specifics of what we did and saw as we went along.

 

Garbage collection & TSL warnings...why now?


I was recently pinged by a friend who is rolling out LH in their production environment. They were having an interesting issue where the LH DC showed these two events, in this order:

(event log entries snipped some for brevity)

 

Log Name:      Directory Service
Source:        NTDS General
Event ID:      1859
Task Category: Garbage Collection
Level:         Warning
User:          ANONYMOUS LOGON
Description:
Internal event: The current garbage collection interval is larger than the maximum value.

Current garbage collection interval (hours):
40000
Maximum value:
168
New value:
168

Log Name:      Directory Service
Source:        NTDS General
Event ID:      1088
Task Category: Internal Configuration
Level:         Warning
User:          ANONYMOUS LOGON
Description:
Internal event: The following tombstone lifetime registry value is too low or incompatible with the following garbage collection interval specified in the Active Directory Domain Services Configuration object. As a result, the following default registry values for the tombstone lifetime and garbage collection will be used.

This is an old domain which has been around for years. The win2k3 boxes are chugging along without a problem…only the LH boxes were throwing this event. So, the question is, what’s going on?

 

I took a peek at this for my friend and came up with the following. Thought I'd share it in case others see this too.

 

First event, the GC one. Well, the event told you the problem. :) The GC interval is set to 40000 hours. That's roughly "a long time." The GC interval is huge, and we don't like it: we enforce in the product that the GC interval is no greater than a week (168 hours). So we went ahead and, in memory, set it to a week.

NOTE: We do not change the setting in the DS, we simply ignore it and use our own. So the setting stays wrong in AD.

 

Next event, the TSL event. This is the more interesting one. We check a few things here. First, we make sure your TSL is greater than the minimum….check, this customer had a value which was ok (I forget exactly, but it was somewhere in the teens of days). Next we check that the TSL is at least 3 times the GC interval. That check failed: the GC interval is now 7 days (remember, we just clamped it to a week because the configured value was nonsensically huge), so the TSL would need to be at least 21 days, and a value in the teens falls short. So we freak and show you the event. We further set the TSL to the default in code, which is 60 days. Again, no config change, just an in-memory change.
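
For reference, the values the events are complaining about live on the Directory Service object in the configuration NC. If you want to actually fix the configuration rather than have us ignore it in memory, something like the following LDF (imported with ldifde) does it. The numbers here are just sane examples: 12 hours is the default GC interval, and you should pick a TSL appropriate for your environment:

dn: CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=yourdomain,DC=com
changetype: modify
replace: garbageCollPeriod
garbageCollPeriod: 12
-
replace: tombstoneLifetime
tombstoneLifetime: 180
-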

 

Last item…why now? What changed? Why did this customer not get this before on 2k3? No values changed, after all.

This code is largely identical except for one small change. In 2k3, the code only throws this event if the Garbage Collection logging level is set to 2 or above. In LH, it fires the event if the logging level is 0 or above. Since all of the customer's machines were set to 0, this explains why 2k3 didn't throw the event whereas LH did.

(Personally, I’m happy about this change…I think we should have always flagged this condition. It’s weird, we should have always drawn your attention to it.)
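
(Related aside: if you want 2k3 to surface this event too, you can turn up the Garbage Collection diagnostics level on the DC. If memory serves, the value lives under the NTDS Diagnostics key, something like this; double check the value name on your build before relying on it:

C:\>reg add HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics /v "6 Garbage Collection" /t REG_DWORD /d 2
)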


Finding the lost&found container in S.DS.P...or anything that isn't ADSI really


I found myself writing a piece of C# which would go hunt for objects in lost&found today. This is a pretty straightforward task….find that container, pop into it and search away. I usually do this by looking at the lost&found well known GUID (which is GUID_LOSTANDFOUND_CONTAINER_W in the platform SDK) then just crafting the search by hand.

 

Anyway, I was feeling particularly lazy today so I went to take a quick look at MSDN and just use their sample. Much to my surprise the only examples I could find did this via ADSI and talked about “binding to well-known objects using WKGUID.” (I’ve NEVER been a fan of the use of terms there…I don’t like how we say “binding to an object” in ADSI as that is a somewhat unnatural construct given what’s really going on under the hood. Further, it makes moving to wldap32/S.DS.P/etc. harder as the terms are different. If only I ruled the world.....) Given this, I figured I’d paste some sample code here.

 

First, the LDAP work itself. To do a search for L&F, you want to craft a search as follows:

            Base DN: <WKGUID=GUID_LOSTANDFOUND_CONTAINER_W,dc=someNC,dc=com>

            Search filter: (objectclass=*)

            Search scope: Base

To dissect that baseDN some…..WKGUID means “well known GUID”, instead of the string I pasted in italics (GUID_LOSTANDFOUND_CONTAINER_W) you would of course want to actually use the GUID there, and then after the comma you put the DN of the naming context you wish to search (for example, dc=mydomain,dc=com).

 

So here’s a quick and dirty piece of C# written against System.DirectoryServices.Protocols APIs that would get this DN for you. Please note that this is sample code so you really want to robustify it some before actually putting this in an application, and clean up the little foreach I have there just because I’m being lazy. ;)

 

            string lAndFBaseDN = String.Format("<WKGUID={0},{1}>", GlobalVars.GUID_LOSTANDFOUND_CONTAINER_W, myNCDN);
            string[] attrList = { "dn" };
            string searchFilter = "(objectclass=*)";
            SearchRequest sr = new SearchRequest(lAndFBaseDN, searchFilter, SearchScope.Base, attrList);
            SearchResponse searchResponse = (SearchResponse)myLdapConnection.SendRequest(sr);
            if (searchResponse.Entries.Count == 0)
            {
                // Must be no l&f container in the target dn (like, target not an NC for example)
            }
            Debug.Assert(searchResponse.Entries.Count == 1, "More than one L&F container found. This is weird.");
            // If there is more than one, we just return the first. But there should not be.
            foreach (SearchResultEntry sre in searchResponse.Entries)
            {
                return sre.DistinguishedName;
            }

 

I think it goes w/o saying but you can apply similar logic to other well known GUIDs. I just picked l&f as it was convenient. ;)
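
For completeness, GlobalVars in the snippet above is just a little constants class of mine. A minimal stand-in looks like this; the values are as I recall them from ntdsapi.h in the platform SDK, so verify them against your headers before trusting them:

    // Minimal stand-in for the GlobalVars referenced above.
    static class GlobalVars
    {
        // Well known object GUIDs, as I recall them from ntdsapi.h -- verify before use.
        public const string GUID_LOSTANDFOUND_CONTAINER_W = "ab8153b7768811d1aded00c04fd8d5cd";
        public const string GUID_DELETED_OBJECTS_CONTAINER_W = "18e2ea80684f11d2b9aa00c04f79f805";
        public const string GUID_USERS_CONTAINER_W = "a9d1ca15768811d1aded00c04fd8d5cd";
    }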

Change visibility in the directory...or lack there of (aka "what's the point of aging?")


I’m often asked about aging in adamsync so I thought I’d present the more general problem here for people to ponder. Hopefully this gives some context around the problem which aging in adamsync is supposed to address.


Imagine you are writing a tool which sync’s changes out of AD. You (the person running this tool) have some set of permissions…whatever they may be. You are syncing along happily.

One day you get a phone call…”My user was moved from OU=bar in to OU=foo yet the sync target still shows me in OU=bar. What gives?” You begin to investigate only to find out that you don’t have permissions to OU=foo. As a result, you don’t have any of the objects in OU=foo in your target location. The reason is straight forward….you don’t have permissions to the target, so when the object moved from bar to foo you never saw this change. You couldn’t see this change! You didn’t have permissions to OU=foo.


This is one of many such cases. If you don’t have the ability to see some object in the target location, it is hard to say anything about your view of it from the source. You could still have the object in the source location and have no idea that it moved out of your view. The reason is of course straight forward….you can’t see the target so you didn’t see that mod and we don’t have any construct where the source can say “out of your purview but not here anymore.” So you simply don’t realize the object has changed.


Historically, this was not nearly as much of a problem. Most people use DirSync to sync changes out of AD. In Win2K, in order to use DirSync you needed to be a domain admin. So, you could see most  things that happen (out of the box anyway). In Win2k3 we built a feature for DirSync that made this problem more common….DirSync object security mode. In this mode anyone can use DirSync to sync out of any partition they so choose, and DirSync only shows changes for objects you have access to see. This is a very useful feature.


So now let’s consider adamsync, a simple DirSync client, and the problem I've mentioned above. When we wrote adamsync we wanted to ensure that we could handle the scenario where you are not an admin and want to sync data out. So, we default the tool to object security mode. This is fairly convenient for non-admins that wish to use the tool.


However, consider a very mainstream case. You are using adamsync to sync objects out of some domain NC. Objects are deleted. You don’t have permissions to see the deletions (remember I said that you “lose changes” when you move an object to a place you don’t have perms to? Well normal users don’t have permissions to the deleted objects container out of the box. So it’s a very common mainstream case for this problem….). As a result, you never reflect the deletion in your target container in ADAM. You’re woefully out of date.


One fix for this problem could be that you just give more perms. In the deleted objects case, just give the user who is syncing permissions to read the deleted objects container. But some people might not find that acceptable, depending upon their scenario.


Enter aging. We wrote aging to be a periodic background thread that goes and checks to make sure objects which we haven’t seen change in a while are actually still there. So you can imagine that every now and then you go back and check to ensure that all objects you have in the target are still in the source. This is the aging approach. While the specifics are configurable, that’s the basic idea.
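
In adamsync configuration terms, this is the aging section of the schedule block we've seen in the config files earlier in these posts. My reading of the knobs, based on the "Aging requested every N runs" log line, is that frequency is how many sync runs go by between aging passes and num-objects caps how many objects get verified per pass; take that with a grain of salt and test in your own environment. Something like this would ask for an aging pass every 5 runs:

   <schedule>
    <aging>
     <frequency>5</frequency>
     <num-objects>0</num-objects>
    </aging>
    <schtasks-cmd></schtasks-cmd>
   </schedule>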


Aging is just one such mechanism. There are lots of approaches to this problem that one could consider. It’s just the one we chose for adamsync.


One minor point I'll raise before ending this post. Aging in ADAMSync in R2 is unfortunately not working properly. There is a bug that basically breaks it in some cases. It's hard to say when you'd hit it, and you might never, but you should plan as though you will at some point. So if you need aging pre-LH (ie, you have a compelling scenario where you want to sync as a non-admin) please open a QFE request with PSS. Or just give perms to deleted objects for now (or whatever the container is which you can't see)…a much easier quick-fix.


(just updated some formatting)

Constructed attributes are your friend


The schema itself has a whole lot of interesting nuances. Within the schema we define multiple different types of attributes. One of the most useful attribute types we have might just be the constructed attribute.

 

Constructed attributes are interesting in that they aren’t a single value in the database which is read and returned. When you read a constructed attribute, rather than retrieving a value from the database, we perform a calculation of some sort and return the result of that calculation. The idea of constructed attributes is to provide the LDAP client reading them with some information that would otherwise be very difficult to obtain. Let’s look at an example.

 

Oftentimes people would like to look at metadata on an object that is used during replication. Perhaps you are writing tools to monitor replication (hey, it could happen :)) and see this information as of use to you. For this purpose we created two attributes: msDS-ReplAttributeMetaData and msDS-ReplValueMetaData. First, let's look at the schema definition for one of these attributes (partial definition):

 

Dn: CN=ms-DS-Repl-Value-Meta-Data,CN=Schema,CN=Configuration,DC=domain,DC=local
            1> attributeID: 1.2.840.113556.1.4.1708;
            1> attributeSyntax: 2.5.5.12;
            1> isSingleValued: FALSE;
            1> oMSyntax: 64;
            1> lDAPDisplayName: msDS-ReplValueMetaData;
            1> systemFlags: 0x14 = ( FLAG_ATTR_IS_CONSTRUCTED | FLAG_SCHEMA_BASE_OBJECT );

 

 

Note the FLAG_ATTR_IS_CONSTRUCTED element in systemFlags (one of the flags set; LDP parsed this out for me). This flag tells us that the attribute is constructed.

 

If I visit an object in my directory and request this attribute be returned, I get something like:

>> Dn: CN=Builtin,DC=domain,DC=local

            26> msDS-ReplAttributeMetaData: <DS_REPL_ATTR_META_DATA>

            <pszAttributeName>isCriticalSystemObject</pszAttributeName>

            <dwVersion>1</dwVersion>

            <ftimeLastOriginatingChange>2003-01-12T02:13:38Z</ftimeLastOriginatingChange>

            <uuidLastOriginatingDsaInvocationID>8ee69288-cf81-4b36-898c-f4bf3de88cab</uuidLastOriginatingDsaInvocationID>

            <usnOriginatingChange>8201</usnOriginatingChange>

            <usnLocalChange>7160</usnLocalChange>

            <pszLastOriginatingDsaDN></pszLastOriginatingDsaDN>

</DS_REPL_ATTR_META_DATA>

 

(I trimmed this output some for the sake of the length of this post)

Note that this particular attribute is returned in XML format. Some constructed attributes are returned as XML, some in other formats. But all are returned in a way that makes them easy to parse if required, and you can just read them normally if they are a simple data type of some sort (an int, etc.).

Of course, like other attributes in the directory, we define these elements on MSDN: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adschema/adschema/a_msds_replattributemetadata.asp

 

So with all of that said, what are the caveats? Not many, but there are a few:

  • Constructed attributes are only returned if you request them. If you simply query for all attributes (*) on an object, these attributes will not be returned. One must explicitly request them if they wish to see them. (A short example follows this list.)
  • Constructed attributes are only returned when a base search is used against an object. If you perform a subtree search against an entire group of objects and request this attribute, it will not be returned for all objects in the subtree.
  • One cannot use a constructed attribute in a search filter. That is to say, if you were to search for all objects that match a filter like (msds-replattributemetadata=foo), we would be unable to evaluate that expression properly.
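
To make the first caveat concrete, here's a rough S.DS.P sketch that reads the metadata shown above; the server and DN are placeholders for my test domain. The key points are the explicit attribute list and the base-scoped search:

    using System;
    using System.DirectoryServices.Protocols;

    class ReadReplMetaData
    {
        static void Main()
        {
            using (LdapConnection conn = new LdapConnection("localhost"))
            {
                string dn = "CN=Builtin,DC=domain,DC=local";
                string[] attrs = { "msDS-ReplAttributeMetaData" };   // must be asked for by name

                SearchRequest req = new SearchRequest(dn, "(objectClass=*)", SearchScope.Base, attrs);
                SearchResponse resp = (SearchResponse)conn.SendRequest(req);

                // One value per attribute that has replication metadata on the object.
                SearchResultEntry entry = resp.Entries[0];
                foreach (string xml in entry.Attributes["msDS-ReplAttributeMetaData"].GetValues(typeof(string)))
                    Console.WriteLine(xml);
            }
        }
    }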

It's all about the little things


I love little things. Here is one I noticed this morning.

 

Go ahead and empty your recycle bin. Now create and delete two files, then go to empty the bin again. Note that it says “would you like to delete these 2 files”.

 

Ok, now empty the bin, then create and delete just one file. Now try to empty the bin again. Note that this time it doesn't give you the number; it gives you the name of the file you're about to remove. If there is only one element to be removed, we show the name of the element rather than the number.

 

Who thought to do that?

Man, I wish we had this feature......


Sometimes I just thirst for a feature someone else has.

In talking with an MVP on the phone tonight we got to talking about ESE, and chatted a bit about the new ECC feature introduced in Exchange 2003 SP1 to help with -1018's. Info on it is here if you're interested.

I sure do wish we had that in AD. One day.....

Blech, I was wrong about something I said


Some time ago I posted on constructed attributes. There was a snafu in an item I posted (and by snafu I mean mistake).
I said that constructed attributes are only returned in base searches. This is incorrect in the general case. It is true for some constructed attributes (like tokenGroups) but not true for many others, including the example I posted! I should have at least tested to be sure my statement was correct for my example. Hindsight is always 20/20 I guess. :)

I hate wrong info, sorry to have passed it along.

Another access based enumeration mention.....


WS2003 Service Pack 1 brings with it all sorts of goodies. I’ll try and mention many of the AD ones over the next few weeks, but one I can’t help but mention sooner rather than later is access based enumeration.

Rather than digging in to the feature, let’s instead start with a scenario.

Assume you have some server, and on that server you have a share named 'Users'. In the Users folder is a subfolder for each user that uses this particular server. Each user's folder is ACL'd such that no one can get in to it but the user in question. From a security perspective, this is pretty safe.

That said, some customers wanted more. They wanted to be able to actually prevent one user from seeing another user’s folder. This way, you wouldn’t even know what other users had a folder on the given server. Enter access based enumeration.

From an implementation perspective, what access based enumeration does is pretty simple: when a user enumerates the parent folder (as the case may be, \\MyServer\Users) it will do an extra check for each subfolder before returning that folder to the user. Before access based enumeration, we would see if you had the appropriate perms on the Users folder, and if so return all contents. With access based enumeration enabled, we further check each folder in the share, to ensure you have the required perms on that folder itself, and only for those folders that you have permissions to do we return the folder in the list.

Simple, yet we didn't do it in the past. Well, you have it now. :)
That said, there are a few things to keep in mind here.

First, it is independent of the client. This is entirely server-side. This is nice, as who knows what OS / SP your client is running. So don’t worry, it’s just a server side service pack you need to apply.

Second, there is a performance hit here. Perhaps not huge, but some. We used to have to check one ACL (the parent folder, aka the \Users share) before returning the list. Now we have to actually walk each subfolder as well, and check those ACLs too. Probably not terrible, but it is worth keeping in mind, especially on heavily loaded servers where you plan to enable this on a share with many thousands of folders under a given parent. One thing that helps from a performance perspective, though, is that once we walk the list once, you do have the file system cache to help you.

Third, this is unfortunately not manageable from the UI. There are tools around that let you set this special ACL, but the object picker UI (object picker is the name of the ACL UI you see when you decide to set an ACL on a folder) itself won’t let you do this. Perhaps one day. :)

That’s all on that. While written about here and there, I thought I’d raise further awareness as to the presence of this new feature. It’s a good one.

Update (4/19/05) - We have a tool that controls ABE on the web: http://www.microsoft.com/downloads/details.aspx?FamilyID=04A563D9-78D9-4342-A485-B030AC442084&displaylang=en


Has anyone run AD (or ADAM) where the db and logs are on an iSCSI device?


Over on his blog, my mgr Bryce mentions the new iSCSI evaluation going on here in the EEC. I for one don't really understand iSCSI at all. I'm not a hardware guy. :)
However, I do like to talk about directory performance, and I've never seen either AD or ADAM deployed on an iSCSI device. I am going to try and get my hands on some gear and try it out while it is here and see what the numbers look like.

Has anyone else ever tried it? Experiences to share?

Go from 32bit Windows to 64bit for no cost? Really?


Today we put out this page which talks about upgrading to the x64 Windows. Here’s the best part:

The Windows XP Professional x64 Edition Technology Advancement Program enables customers who have purchased Windows XP Professional (32-bit) to exchange it for Windows XP Professional x64 Edition. You will need to have an x64 processor (the AMD Athlon 64, AMD Opteron, Intel Pentium 4 with EM64T, or Intel Xeon with EM64T) to run the new software.

Woo hoo! Go get your x64 Windows ASAP if you’ve got the hardware. :)
Note the date! It says you need to do this by July 31, 2005. So get ordering!

A little iSCSI time


Over on the EEC blog I just posted that I'll be working with iSCSI some. This is a new area for me, one I'm not really all that familiar with. Quite honestly, I'm not much of a hardware guy, so I suspect I'll be asking the hardware guys for help. :)

But I am a software guy, and I love working with fast hardware to see how well the software does. So this work with iSCSI should be a whole lot of fun. I'll be posting about it on the EEC blog, as it's more of an EEC thing than an EFleis thing. We'll see where it goes.

I had no idea Jim was blogging


I just learned that Jim Johnson is blogging. Very cool indeed. I'm a fan of that team's work.
I had the pleasure of meeting Jim not too long ago, and was really impressed with what that team is up to.

Jim recently blogged about some of their Longhorn work. Extending transactions to file system operations is just natural. I can't believe it's taken us this long. :)

On identity futures....

Kim Cameron has been blogging up a storm on identity futures in MS products. It's always interesting to think in to the future. Most of my work focuses around today and today++, not nearly as much in to the distant future, but I'm glad someone is looking out there. :)

InfoCard is a super interesting product that's getting some great coverage over on Kim's blog, as well as some other blogs he links to. I highly suggest checking it out.