Wednesday, October 31, 2012 by Nate Bross
Someone asked on Stack Overflow:
Is there a way to get an IEnumerable<T> from an IEnumerable without reflection, assuming I know the type at design time?
I have this
foreach(DirectoryEntry child in de.Children)
{
// long running code on each child object
}
I am trying to enable parallelization, like so
Parallel.ForEach(de.Children,
(DirectoryEntry child) => { // long running code on each child });
but this doesn’t work, as de.Children is of type DirectoryEntries. It implements IEnumerable but not IEnumerable<DirectoryEntry>.
I posted the following answer, which was chosen as the accepted answer and received 6 upvotes:
The way to achieve this is to use the .Cast<T>() extension method.
Parallel.ForEach(de.Children.Cast<DirectoryEntry>(),
(DirectoryEntry child) => { // long running code on each child });
Another way to achieve this is to use the .OfType<T>() extension method.
Parallel.ForEach(de.Children.OfType<DirectoryEntry>(),
(DirectoryEntry child) => { // long running code on each child });
There is a subtle difference between .Cast<T>() and .OfType<T>():
The OfType<TResult>(IEnumerable) method returns only those elements in source that can be cast to type TResult. To instead receive an exception if an element cannot be cast to type TResult, use Cast<TResult>(IEnumerable).
— MSDN
This link on the MSDN forums got me going the right direction.
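A minimal, self-contained sketch (not from the original answer) that makes the quoted difference concrete: OfType<T>() silently filters out incompatible elements, while Cast<T>() throws on the first one it hits.

```csharp
using System;
using System.Collections;
using System.Linq;

class CastVsOfType
{
    static void Main()
    {
        // A non-generic collection holding mixed element types.
        var mixed = new ArrayList { "one", 2, "three" };

        // OfType<string>() keeps only the elements that actually are strings.
        var strings = mixed.OfType<string>().ToArray();
        Console.WriteLine(string.Join(",", strings)); // one,three

        // Cast<string>() throws as soon as it reaches the integer 2.
        try
        {
            mixed.Cast<string>().ToArray();
        }
        catch (InvalidCastException)
        {
            Console.WriteLine("Cast<string>() threw InvalidCastException");
        }
    }
}
```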
Notable comments
Tim Schmelter (3 upvotes): @KooKiz: But pointless if you know that every element is a DirectoryEntry anyway. OfType filters and casts, so if you need to filter use OfType otherwise Cast.
Originally posted on Stack Overflow — 6 upvotes (accepted answer). Licensed under CC BY-SA.
Friday, September 28, 2012 by Nate Bross
Someone asked on Stack Overflow:
I want to validate some form fields on the server side, but I don’t want to use Data Annotations custom validators. I need to set the validation message manually, based on the return value of the business-layer method that is called.
Just as an example!
NEED:
If the given username already exists, the MVC4 validation error span shall display “This username already exists.”
CODE:
if (_business.UserNameExists(username))
{
// Set the field validation error span message
// HOW TO DO??
}
I posted the following answer, which was chosen as the accepted answer and received 15 upvotes:
A friend came up with the solution; it is very simple!
if (_business.UserNameExists(username))
{
// Set the field validation error span message
ModelState.AddModelError("UserName", "This username already exists.");
}
Where UserName is the name of the Entity attribute being validated.
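For that message to actually render, the view needs a validation placeholder bound to the same key. A minimal sketch using the standard MVC helpers (the property name is assumed to match the question):

```
@* Strongly typed helper: *@
@Html.ValidationMessageFor(model => model.UserName)
@* Or keyed by the same string passed to AddModelError: *@
@Html.ValidationMessage("UserName")
```

Either one emits the validation span that AddModelError("UserName", …) populates on postback.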
Originally posted on Stack Overflow — 15 upvotes (accepted answer). Licensed under CC BY-SA.
Monday, September 17, 2012 by Nate Bross
Someone asked on Stack Overflow:
In my .NET application, I want to change a user’s login name in Active Directory.
I’m currently changing it this way:
DirectoryEntry userToUpdate = updatedUser.GetDirectoryEntry();
userToUpdate.Properties["sAMAccountName"].Value = user.NewLogin;
userToUpdate.CommitChanges();
But it doesn’t work as I expect. When I check the entry for this user in “Active Directory Users and Computers”, on the “Account” tab I see that:
- the “User logon name” property isn’t updated
- the “User logon name (pre-Windows 2000)” property is correctly updated.
How do I correctly update the login name in AD from C# code? What property should I set on the DirectoryEntry, or is there another method to change the login name?
I posted the following answer, which was chosen as the accepted answer and received 7 upvotes:
There are two logon names in AD:
sAMAccountName = User logon name (pre-Windows 2000)
Format/Usage: domain\user.name (note: your code will only populate user.name)
userPrincipalName = User logon name
Format/Usage: user.name@domain.local
You need to update both.
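A sketch of what that looks like in code, building on the question’s snippet (not run against a live directory; the “domain.local” suffix is a placeholder, not something from the original answer):

```csharp
// Sketch: write both attributes so the two "User logon name" fields
// in ADUC stay in sync. "domain.local" is a placeholder domain.
DirectoryEntry userToUpdate = updatedUser.GetDirectoryEntry();

// "User logon name (pre-Windows 2000)": stored without the domain prefix.
userToUpdate.Properties["sAMAccountName"].Value = user.NewLogin;

// "User logon name": stored in UPN form, user.name@domain.local.
userToUpdate.Properties["userPrincipalName"].Value = user.NewLogin + "@domain.local";

userToUpdate.CommitChanges();
```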
Notable comments
Nate (0 upvotes): Correct, but when you log in you must type domain\user (some apps fill in the domain for you). I updated to make that clear.
Originally posted on Stack Overflow — 7 upvotes (accepted answer). Licensed under CC BY-SA.
Monday, September 17, 2012 by Nate Bross
Someone asked on Stack Overflow:
Is there a good way to monitor a phone’s location? I’m working on an app that lets you check in to places, but I want to automatically check the user out if their phone leaves the place. So the app would need to wake up every 10 or 15 minutes, whether the phone was locked or not, and compare its current location to the location of the last place it was checked in. If it’s not the same, it checks the user out.
The challenge is that the phone might be locked when the user leaves the location, and I don’t want to wait until the user unlocks their phone, or even worse, opens the app to update the location.
Is there a good way to do this in WP7?
I posted the following answer, which was chosen as the accepted answer and received 2 upvotes:
You will need to use the GeoCoordinateWatcher and the Background Tasks API. Using it in a background task causes it to use cached location data. This cache is updated every 15 minutes.
This API, used for obtaining the geographic coordinates of the device, is supported for use in background agents, but it uses a cached location value instead of real-time data. The cached location value is updated by the device every 15 minutes.
— MSDN
Originally posted on Stack Overflow — 2 upvotes (accepted answer). Licensed under CC BY-SA.
Thursday, September 13, 2012 by Nate Bross
Someone asked on Stack Overflow:
I have a DropDownFor on my View and I’m looking to create another DropDownFor only if a particular SelectList item from the first DropDownFor is selected.
To clarify, if my DropDownFor has two possible choices, “A” and “B”, and if “B” is selected, I want another DropDownFor to display on the page. If “A” is selected, I want nothing more to happen to the page.
How can I implement this?
I posted the following answer, which was chosen as the accepted answer and received 3 upvotes:
Something like this should do the trick:
script (using jQuery)
$(document).ready(function () {
    $('#optionOne').change(function () {
        // remove any dropdown added by a previous selection,
        // so switching back to 'a' (or re-selecting 'b') doesn't leave duplicates
        $('#options select.extra').remove();
        if ($(this).val() === 'b') {
            $('#options').append('<select class="extra"><option>newset</option></select>');
        }
    });
});
markup
<div id="options">
<select id="optionOne">
<option>a</option>
<option>b</option>
</select>
</div>
JSFiddle Example of the above code — http://jsfiddle.net/NpSPj/1/
Notable comments
Nate (0 upvotes): @MrOBrian I agree. This is a baseline proof of concept. Building the second one based on Ajax request, and/or hiding it initially and showing it on select would both be good options. This solution could be adapted to both options with a bit more effort and specific info based on requirements.
Originally posted on Stack Overflow — 3 upvotes (accepted answer). Licensed under CC BY-SA.
Friday, August 31, 2012 by Nate Bross
I asked this on Database Administrators:
As follow up to this question about increasing query performance, I’d like to know if there is a way to make my index used by default.
This query runs in about 2.5 seconds:
SELECT TOP 1000 * FROM [CIA_WIZ].[dbo].[Heartbeats]
WHERE [DateEntered] BETWEEN '2011-08-30' and '2011-08-31';
This one runs in about 33ms:
SELECT TOP 1000 * FROM [CIA_WIZ].[dbo].[Heartbeats]
WHERE [DateEntered] BETWEEN '2011-08-30' and '2011-08-31'
ORDER BY [DateEntered], [DeviceID];
There is a clustered index on the [ID] field (pk) and there is a non clustered index on [DateEntered],[DeviceID]. The first query uses the clustered index, the second query uses my non-clustered index. My question is two parts:
- Why, since both queries have a WHERE clause on the [DateEntered] field, does the server use the clustered index on the first, but not the second?
- How can I make the non-clustered index be used by default for this query, even without the ORDER BY? (Or why would I not want that behavior?)
Edward Dortland answered (10 upvotes):
The first query does a table scan, based on the threshold I explained earlier in: Is it possible to increase query performance on a narrow table with millions of rows?
(Most likely your query without the TOP 1000 clause would return more than 46k rows, or somewhere between 35k and 46k; the grey area ;-) )
The second query must be ordered. Since your NC index is ordered the way you want it, it’s cheaper for the optimiser to use that index and then do the bookmark lookups into the clustered index to get the missing columns, compared to doing a clustered index scan and then sorting the result.
Reverse the order of the columns in the ORDER BY clause and you are back to a clustered index scan, since the NC index is then useless.
Edit: I forgot to answer your second question, why you DON’T want this:
Using a non-clustered, non-covering index means that a row ID is looked up in the NC index, and then the missing columns have to be looked up in the clustered index (the clustered index contains all columns of a table). The IOs to look up the missing columns in the clustered index are random IOs.
The key here is RANDOM: for every row found in the NC index, the access methods have to go look up a new page in the clustered index. This is random, and therefore very expensive.
Now, on the other hand, the optimiser could also go for a clustered index scan. It can use the allocation maps to look up scan ranges and just start reading the clustered index in large chunks. This is sequential and much cheaper (as long as your table isn’t fragmented :-) ). The downside is that the WHOLE clustered index needs to be read. That is bad for your buffer, and potentially a huge amount of IO; but still, sequential IO.
In your case, the optimiser decides that somewhere between 35k and 46k rows, it’s less expensive to do a full clustered index scan. Yeah, it gets it wrong, and in a lot of cases with narrow non-clustered indexes and not-too-selective WHERE clauses (or large tables, for that matter) it goes wrong. Your case is worse, because the table is also very narrow.
Now, adding the ORDER BY makes it more expensive to scan the full clustered index and then sort the results. Instead, the optimiser assumes it’s cheaper to use the already-ordered NC index and then pay the random-IO penalty for the bookmark lookups.
So your ORDER BY is a perfect “query hint” kind of solution. BUT, at a certain point, once your query results are big enough, the penalty for the random bookmark-lookup IOs becomes so large that the query gets slower. I assume the optimiser will change plans back to the clustered index scan before that point, but you never know for sure.
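If relying on the ORDER BY feels too implicit, the same plan can be requested with an explicit table hint (a sketch; the index name is the one defined in the linked question):

```sql
-- Sketch: force the optimiser onto the narrow NC index explicitly,
-- instead of nudging it there with an ORDER BY.
SELECT TOP 1000 *
FROM [CIA_WIZ].[dbo].[Heartbeats] WITH (INDEX(CommonQueryIndex))
WHERE [DateEntered] BETWEEN '2011-08-30' AND '2011-08-31';
```

The trade-off: a hint pins the plan even for predicates that return far more rows, where the bookmark lookups would make the scan the better choice.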
In your case, as long as your inserts are ordered by DateEntered, as discussed in chat and in the previous question (see link), you are better off creating the clustered index on the DateEntered column.
Notable comments
Nate (0 upvotes): DateEntered is a DateTime, in this case I’m using the date part, but I sometimes query against both date and time together.
Originally posted on Database Administrators — 14 upvotes. Licensed under CC BY-SA.
Thursday, August 30, 2012 by Nate Bross
I asked this on Database Administrators:
I have a query that is currently taking an average of 2500ms to complete. My table is very narrow, but there are 44 million rows. What options do I have to improve performance, or is this as good as it gets?
The Query
SELECT TOP 1000 * FROM [CIA_WIZ].[dbo].[Heartbeats]
WHERE [DateEntered] BETWEEN '2011-08-30' and '2011-08-31';
The Table
CREATE TABLE [dbo].[Heartbeats](
[ID] [int] IDENTITY(1,1) NOT NULL,
[DeviceID] [int] NOT NULL,
[IsPUp] [bit] NOT NULL,
[IsWebUp] [bit] NOT NULL,
[IsPingUp] [bit] NOT NULL,
[DateEntered] [datetime] NOT NULL,
CONSTRAINT [PK_Heartbeats] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
The Index
CREATE NONCLUSTERED INDEX [CommonQueryIndex] ON [dbo].[Heartbeats]
(
[DateEntered] ASC,
[DeviceID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
Would adding additional indexes help? If so, what would they look like? The current performance is acceptable, because the query is only run occasionally, but I’m wondering as a learning exercise, is there anything I can do to make this faster?
UPDATE
When I change the query to use a force index hint, the query executes in 50ms:
SELECT TOP 1000 * FROM [CIA_WIZ].[dbo].[Heartbeats] WITH(INDEX(CommonQueryIndex))
WHERE [DateEntered] BETWEEN '2011-08-30' and '2011-08-31'
Adding a correctly selective DeviceID clause also hits the 50ms range:
SELECT TOP 1000 * FROM [CIA_WIZ].[dbo].[Heartbeats]
WHERE [DateEntered] BETWEEN '2011-08-30' and '2011-08-31' AND DeviceID = 4;
If I add ORDER BY [DateEntered], [DeviceID] to the original query, I am in the 50ms range:
SELECT TOP 1000 * FROM [CIA_WIZ].[dbo].[Heartbeats]
WHERE [DateEntered] BETWEEN '2011-08-30' and '2011-08-31'
ORDER BY [DateEntered], [DeviceID];
These all use the index I was expecting (CommonQueryIndex) so, I suppose my question is now, is there a way to force this index to be used on queries like this? Or is the size of my table throwing off the optimizer too much and I must just use an ORDER BY or a hint?
Edward Dortland answered (15 upvotes):
Why the optimiser doesn’t go for your first index:
CREATE NONCLUSTERED INDEX [CommonQueryIndex] ON [dbo].[Heartbeats]
(
[DateEntered] ASC,
[DeviceID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
is a matter of the selectivity of the [DateEntered] column.
You told us that your table has 44 million rows. The row size is: 4 bytes for the ID, 4 bytes for the DeviceID, 8 bytes for the date, and 1 byte for the bit columns. That’s 17 bytes, plus 7 bytes of overhead (tag bytes, NULL bitmap, variable-column offset, column count), for a total of 24 bytes per row.
That roughly translates to 140k pages to store those 44 million rows.
Now the optimiser can do two things:
- It could scan the table (clustered index scan)
- Or it could use your index. For every row in your index, it would then need to do a bookmark lookup in the clustered index.
Now at a certain point, it just becomes more expensive to do all these single lookups in the clustered index for every entry found in your non-clustered index. The threshold is generally that the total count of lookups should exceed 25% to 33% of the total table page count.
So in this case: 140k × 25% = 35,000 rows; 140k × 33% = 46,666 rows.
(@RBarryYoung, 35k is 0.08% of the total rows and 46,666 is 0.10%, so I think that is where the confusion was.)
So if your WHERE clause results in somewhere between 35,000 and 46,666 rows (this is before the TOP clause is applied!), it’s very likely that your non-clustered index will not be used and the clustered index scan will be used instead.
The only two ways to change this are:
- Make your where clause more selective. (if possible)
- Drop the * and select only a few columns so you can use a covering index.
Now, sure, you could create a covering index even when you use SELECT *. However, that just creates massive overhead for your inserts/updates/deletes. We would have to know more about your workload (read vs. write) to tell whether that’s the best solution.
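For the drop-the-* route, a covering index for this particular query could look something like the following sketch (the index name is made up; INCLUDE keeps the bit columns in the leaf but out of the key):

```sql
-- Sketch: every column the query touches lives in the index leaf,
-- so the bookmark lookups into the clustered index disappear.
CREATE NONCLUSTERED INDEX [IX_Heartbeats_DateEntered_Covering]
ON [dbo].[Heartbeats] ([DateEntered] ASC, [DeviceID] ASC)
INCLUDE ([IsPUp], [IsWebUp], [IsPingUp]);
```

(The clustering key [ID] is carried in every non-clustered index automatically, so this leaf row covers SELECT * as well; the cost is wider leaf pages to maintain on every insert.)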
Changing from datetime to smalldatetime (8 bytes down to 4) is roughly a 16% reduction in size on the clustered index and a 24% reduction in size on your non-clustered index.
Originally posted on Database Administrators — 16 upvotes. Licensed under CC BY-SA.
Wednesday, August 8, 2012 by Nate Bross
Someone asked on Game Development:
I’m on Windows Phone OS 7.1, and this question is about better understanding the game pipeline.
Game.IsFixedTimeStep = true and TargetElapsedTime is 60
The idea was to start a new thread and use Thread.Sleep()
protected override void Update(GameTime gameTime)
{
Sleep();
base.Update(gameTime);
}
bool m_bSleepRunning;
void Sleep()
{
m_bSleepRunning = true;
new System.Threading.Thread(
() =>
{
Thread.Sleep(2000);
m_bSleepRunning = false;
}
).Start();
while(m_bSleepRunning)
{
//empty cycle
}
}
While debugging, there is a pause on every update, but the property gameTime.ElapsedGameTime.TotalMilliseconds is not updated.
I posted the following answer, which was chosen as the accepted answer and received 4 upvotes:
By calling Thread.Sleep() inside a new thread, the main thread does not sleep and your Update method is not blocked. To achieve what you asked, give this a shot:
protected override void Update(GameTime gameTime)
{
System.Threading.Thread.Sleep(2000);
base.Update(gameTime);
}
If, on the other hand, your objective is to only update something every so often, try something like this:
float elapsedTime = 0f;
protected override void Update(GameTime gameTime)
{
elapsedTime += (float)gameTime.ElapsedGameTime.TotalMilliseconds; // cast needed: TotalMilliseconds is a double
if(elapsedTime >= 2000)
{
elapsedTime = 0;
// run the code you want to happen every so often here
}
base.Update(gameTime);
}
The first code block will bring your entire game to a grinding halt for two seconds; this may be what you are trying to do. The second code block will keep your game running, but lets you execute some code only every two seconds.
Notable comments
Nate (0 upvotes): @Artru I don’t understand what you’re trying to do. My first code block does what it looks like your code is trying to do. My second code block allows you to perform a task every so often without blocking the UI. If you are performing an exhaustive task in the UI thread, it will block. Based on your use of threads above and your updated question, it seems that you are trying to synchronize the UI thread with a worker thread, in which case you should do some research and find out how to best do that in your situation. Something like DavidLively’s solution should be a good start.
Originally posted on Game Development — 4 upvotes (accepted answer). Licensed under CC BY-SA.
Thursday, August 2, 2012 by Nate Bross
Someone asked on Stack Overflow:
A LINQ solution would be preferred. I need the values that are duplicated when concatenating the two lists.
I posted the following answer, which was chosen as the accepted answer and received 1 upvote:
If you want to find out which items are in both lists, you need to use the Enumerable.Intersect() method.
var list1 = new List<KeyValuePair<string,string>>();
var list2 = new List<KeyValuePair<string,string>>();
list1.Add(new KeyValuePair<string,string>("key1", "value1"));
list1.Add(new KeyValuePair<string,string>("key2", "value2"));
list2.Add(new KeyValuePair<string,string>("key1", "value1"));
list2.Add(new KeyValuePair<string,string>("key3", "value3"));
var inBothLists = list1.Intersect(list2); // contains only key1,value1
There are two overloads, one takes an IEqualityComparer<T> so in the event that the default one does not perform the comparison the way you want, you can write and provide your own.
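As a sketch of that overload (the comparer below, which matches on key alone and ignores case, is an invented example, not something from the original answer):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Invented example comparer: two pairs are "equal" if their keys match
// case-insensitively; values are ignored entirely.
class KeyOnlyComparer : IEqualityComparer<KeyValuePair<string, string>>
{
    public bool Equals(KeyValuePair<string, string> x, KeyValuePair<string, string> y)
        => string.Equals(x.Key, y.Key, StringComparison.OrdinalIgnoreCase);

    public int GetHashCode(KeyValuePair<string, string> obj)
        => StringComparer.OrdinalIgnoreCase.GetHashCode(obj.Key);
}

class Demo
{
    static void Main()
    {
        var list1 = new List<KeyValuePair<string, string>>
        {
            new KeyValuePair<string, string>("KEY1", "value1"),
            new KeyValuePair<string, string>("key2", "value2"),
        };
        var list2 = new List<KeyValuePair<string, string>>
        {
            new KeyValuePair<string, string>("key1", "anything"),
        };

        // Matches on key alone, despite the case and value differences.
        // Intersect yields the element from the first sequence.
        var inBoth = list1.Intersect(list2, new KeyOnlyComparer()).ToList();
        Console.WriteLine(inBoth.Count);  // 1
        Console.WriteLine(inBoth[0].Key); // KEY1
    }
}
```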
Notable comments
Nate (0 upvotes): @AndreasNiedermair Just revised using new KeyValuePair<string,string>("key#", "value#") and it seems to behave as expected.
Originally posted on Stack Overflow — 1 upvote (accepted answer). Licensed under CC BY-SA.
Thursday, July 26, 2012 by Nate Bross
Someone asked on Stack Overflow:
I have code using the System.Net and System.Net.NetworkInformation namespaces; it generates a list of my network connection names.
Everything seemed fine and working, but when I moved this code into a class and added its return value to listBox1’s items, I got only one network connection name, when I really have four.
How can I solve this problem?
private void button1_Click(object sender, EventArgs e)
{
Adapters obj = new Adapters();
var value = obj.net_adapters();
listBox1.Items.Add(value);
}
public class Adapters
{
public string net_adapters()
{
string value = string.Empty;
foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces())
{
value = nic.Name;
}
return value;
}
}
I posted the following answer, which was chosen as the accepted answer and received 17 upvotes:
I would modify the code you currently have:
public string net_adapters()
{
string value = string.Empty;
foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces())
{
// bug in your original code right here: `=` overwrites value on each pass
// you probably meant something like value += ", " + nic.Name
// which would not work well with the ListBox Items collection
value = nic.Name;
}
return value;
}
To be like this:
public System.Collections.Generic.List<String> net_adapters()
{
List<String> values = new List<String>();
foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces())
{
values.Add(nic.Name);
}
return values;
}
A fancier way (although it probably doesn’t matter here, because GetAllNetworkInterfaces most likely blocks until it has a full list) would be to use IEnumerable<T> and yield return:
public IEnumerable<String> net_adapters()
{
foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces())
{
yield return nic.Name;
}
yield break;
}
Either way, you would use it like this (the question’s code looks like WinForms, where a ListBox binds via DataSource):
var obj = new Adapters();
var values = obj.net_adapters();
listBox1.DataSource = values.ToList(); // DataSource wants an IList; in WPF, use listBox1.ItemsSource = values;
(On a side note, I would recommend that you use the .NET Framework Naming Guide)
Originally posted on Stack Overflow — 17 upvotes (accepted answer). Licensed under CC BY-SA.