• 0

Quick query regarding pointer to struct and HeapAlloc in C++...


Question

Got a quick C++ question here; tl;dr version: I've got a struct containing some objects and I'm creating a new array of them with HeapAlloc (which I thought would be much the same as initialising them all manually). I can access/change VarB and VarC, but whatever I try to do with VarA causes an exception nightmare.

Long version: I've got a struct:

typedef struct ThreadSocketData
{
	list<SOCKET*> VarA;
	unsigned int VarB;
	unsigned int VarC;
} THREADSOCKETDATA, *PTHREADSOCKETDATA;

I'm allocating it as a global and, in a function, as:

PTHREADSOCKETDATA *pDataArrays;
...
pDataArrays = new PTHREADSOCKETDATA[2];
...
pDataArrays[i] = (PTHREADSOCKETDATA)HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, sizeof(THREADSOCKETDATA));

Then I try to use it like so:

pDataArrays[1]->VarB = 5;
cout << pDataArrays[1]->VarB << endl; //works fine
SOCKET *ASocket = new SOCKET;
*ASocket = 5;
pDataArrays[0]->VarA.push_back(ASocket); //exception crazy

I've tried creating it an alternative way, like so, and it seems to work fine:

pDataArrays = new PTHREADSOCKETDATA[2];
pDataArrays[0] = new THREADSOCKETDATA;
pDataArrays[1] = new THREADSOCKETDATA;
pDataArrays[0]->VarC = 5;
pDataArrays[0]->VarB = 4;
pDataArrays[0]->VarA.begin();
SOCKET *ASocket = new SOCKET;
*ASocket = 18;
pDataArrays[0]->VarA.push_back(ASocket);

So why does the second way fully work but the first one doesn't? Surely using HeapAlloc would have the same effect as allocating it all manually?

10 answers to this question

Recommended Posts

  • 0

Operator new calls constructors, malloc/HeapAlloc don't. My guess is that in your first example, VarA is uninitialized because its constructor hasn't run, so you cannot use it.
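For illustration, here's a minimal sketch (untested, reusing your type names) of what it takes to make HeapAlloc'd memory hold a properly constructed object: placement new to run the constructor, and an explicit destructor call before HeapFree:

// Minimal sketch (untested): construct the struct in HeapAlloc'd memory with
// placement new, and run the destructor by hand before HeapFree.
#include <winsock2.h>
#include <windows.h>
#include <list>
#include <new>        // placement new
using namespace std;

typedef struct ThreadSocketData
{
    list<SOCKET*> VarA;
    unsigned int VarB;
    unsigned int VarC;
} THREADSOCKETDATA, *PTHREADSOCKETDATA;

int main()
{
    PTHREADSOCKETDATA p = (PTHREADSOCKETDATA)HeapAlloc(
        GetProcessHeap(), HEAP_ZERO_MEMORY, sizeof(THREADSOCKETDATA));

    new (p) THREADSOCKETDATA();   // run the constructor; VarA is now a valid list
    p->VarA.push_back(NULL);      // no longer blows up

    p->~ThreadSocketData();       // run the destructor before freeing the raw bytes
    HeapFree(GetProcessHeap(), 0, p);
    return 0;
}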

This would all be moot if you were using C++ idioms rather than mixing C idioms in there. In C++ you use new and delete, not malloc, unless you're implementing an allocator or something. Actually, scratch that, in modern C++ you don't even write new and delete, you use smart pointers and containers (see for instance http://fr.slideshare.net/sermp/without-new-and-delete ). In your example if you replace all the C-style arrays with std::vector, and all the raw pointers with smart pointers, every constructor and destructor is guaranteed to run exactly when it should and you shouldn't be leaking anything or accessing uninitialized state.

Also, as a matter of style you don't need to write typedef struct MyType {} MYTYPE in C++, just struct MYTYPE {}, and you'd probably want to avoid SCREAMING CAPS for all your type names if you don't want your code to look like 9-year-olds having an internet argument.
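For example, here's a rough sketch of what the same data could look like with containers instead of raw arrays and owning pointers (illustrative only; it assumes the sockets can be stored by value, and the member names are just suggestions):

// Rough sketch: containers instead of raw arrays and owning pointers.
#include <winsock2.h>
#include <vector>

struct ThreadSocketData
{
    std::vector<SOCKET> sockets;   // was list<SOCKET*> VarA
    unsigned int varB;
    unsigned int varC;
};

int main()
{
    // Two fully constructed elements in one contiguous block; no HeapAlloc,
    // no manual delete, destructors run automatically at the end of scope.
    std::vector<ThreadSocketData> dataArrays(2);

    dataArrays[1].varB = 5;
    dataArrays[0].sockets.push_back(INVALID_SOCKET);
    return 0;
}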

  • 0

This is my first program with threading, so I've been modifying the MSDN sample code, which had it all in caps. Not my idea of naming, but good point; I'll change that.

I don't think I can use smart pointers for this though? pDataArrays[i] is passed by reference to thread i, but I also want to be able to manipulate it from the main thread, which is why I left the HeapAlloc code as MS provided it. So if I remove all the HeapAlloc/HeapFree calls and just use normal new/delete and pass that to CreateThread(), it will work?

  • 0
  On 28/10/2015 at 21:08, Andre S. said:

Do you really need to use the Win32 function CreateThread rather than standard C++ threads?

Well, the problem is that I'm writing this program across three different PCs. My home PC has Visual Studio 2015, so std::thread is fine; I already use it to get the number of CPUs. Another has Visual Studio 2013, and I'm not sure whether it fully supports std::thread or only parts of it. The other has Visual Studio 2010, which doesn't have any std::thread support at all :(.

Using std::thread, can I pass it smart pointers or references to objects I manually new/delete, and access them from both the thread and the main program? Also, is there a std:: replacement for CreateEvent/ResetEvent/SetEvent/WaitForMultipleObjects?

  • 0

VS2015 Community Edition is free ;)

Yes, you can pass any object to a std::thread; look at the documentation. For a ManualResetEvent or AutoResetEvent, you could implement that pretty straightforwardly with std::condition_variable and std::mutex; there are examples online.
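For instance, here's a rough, untested sketch of a manual-reset event built on std::mutex and std::condition_variable, shared with a worker thread through a std::shared_ptr (the class and variable names are just made up for the example):

// Rough sketch of a manual-reset "event" on top of the standard library.
#include <condition_variable>
#include <iostream>
#include <memory>
#include <mutex>
#include <thread>

class ManualResetEvent
{
public:
    void Set()
    {
        std::lock_guard<std::mutex> lock(m_);
        signalled_ = true;
        cv_.notify_all();
    }
    void Reset()
    {
        std::lock_guard<std::mutex> lock(m_);
        signalled_ = false;
    }
    void Wait()   // rough analogue of waiting on the event handle
    {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return signalled_; });
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    bool signalled_ = false;
};

int main()
{
    auto evt = std::make_shared<ManualResetEvent>();

    // std::thread copies its arguments, so sharing a shared_ptr keeps one event
    // alive for both threads; any copyable/movable object can be passed the same way.
    std::thread worker([evt] {
        evt->Wait();
        std::cout << "event signalled\n";
    });

    evt->Set();
    worker.join();
    return 0;
}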

 

  • 0
  On 28/10/2015 at 21:24, n_K said:

Well, the problem is that I'm writing this program across three different PCs. My home PC has Visual Studio 2015, so std::thread is fine; I already use it to get the number of CPUs. Another has Visual Studio 2013, and I'm not sure whether it fully supports std::thread or only parts of it. The other has Visual Studio 2010, which doesn't have any std::thread support at all :(.

Using std::thread, can I pass it smart pointers or references to objects I manually new/delete, and access them from both the thread and the main program? Also, is there a std:: replacement for CreateEvent/ResetEvent/SetEvent/WaitForMultipleObjects?

Like everyone says: VS2015 is free, so no need to make things complicated.

But you never said what sort of code you're writing. If it's anything connected with device drivers, then the std:: stuff is out of the window...

  • 0

I've got VS 2015 Community on this PC, which is fine; it's my PC. The other two PCs aren't mine, they're in an educational establishment and I'm not an administrator, so I can't just install VS 2015 on them, which is why I have to put up with using 2010 and 2013.

I'll see if I can get a portable MinGW working on the other PCs and see what happens if I throw the code at that instead of using 2010 and 2013!

Nope, not drivers, just testing out threads and Winsock and the like. I've not looked into drivers. Can Windows drivers use multiple threads (even if not std::thread threads)? Can Linux modules use threads (using std::thread)?

Thanks!

  • 0

It's a bad idea to mix allocation techniques like that unless you know what you're doing and have a good reason. In particular, STL data structures (such as list) only get initialised properly if their constructors run, which is what the built-in new operator does for you.

You might be able to get around it by constructing VarA in place (placement new, from <new>) after allocating the structure with malloc(), and destroying it by hand before free():

static const int ARRAY_SIZE = 20;
THREADSOCKETDATA* myArray = (THREADSOCKETDATA*)malloc(ARRAY_SIZE * sizeof(THREADSOCKETDATA));

for (int i = 0; i < ARRAY_SIZE; i++)
    new (&myArray[i].VarA) list<SOCKET*>();   // run the list's constructor in place

// clean up: run the destructors manually, then free the raw memory
for (int i = 0; i < ARRAY_SIZE; i++)
    myArray[i].VarA.~list<SOCKET*>();
free(myArray);

I wouldn't recommend it though.

For efficiency purposes, I'd also allocate the entire array as a contiguous block of memory, rather than making multiple heap allocations for each individual structure.
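For instance, something along these lines (a sketch reusing the question's THREADSOCKETDATA type) gives you one contiguous block with every constructor and destructor run for you:

// One contiguous new[] allocation: every element is fully constructed,
// and delete[] runs every destructor.
THREADSOCKETDATA* dataBlock = new THREADSOCKETDATA[2];

dataBlock[1].VarB = 5;
dataBlock[0].VarA.push_back(NULL);   // the list works, its constructor has run

delete[] dataBlock;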

  • 0
  On 29/10/2015 at 15:39, n_K said:

I've got VS 2015 Community on this PC, which is fine; it's my PC. The other two PCs aren't mine, they're in an educational establishment and I'm not an administrator, so I can't just install VS 2015 on them, which is why I have to put up with using 2010 and 2013.

I'll see if I can get a portable MinGW working on the other PCs and see what happens if I throw the code at that instead of using 2010 and 2013!

Nope, not drivers, just testing out threads and Winsock and the like. I've not looked into drivers. Can Windows drivers use multiple threads (even if not std::thread threads)? Can Linux modules use threads (using std::thread)?

Thanks!

Threads are easy to create; after that it can get interesting. There's a reason why just about every GUI system is limited to a single thread.

If you're doing a real device driver for hardware, then you have interrupts in which you need to store whatever you need within about 30 milliseconds and then queue up worker items to do the real processing; those are also time-limited, so long-running work gets pushed out to threads running as a background service. You need to be very aware of which variables are atomic and which data structures can get clobbered by your interrupt code, your DPC code, your worker threads, and so on. The killer is multi-core, because it's so fast: it can look like impossible things happen, such as only half of a store reaching memory, and then it turns out that the compiler operation wasn't atomic...

The solution most people use is locks. After you've done some device driver work, you learn to strike a more delicate balance: learn what's atomic, and remember that every OS has API calls for safe atomic updates, I think.
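As a plain C++ illustration (not driver code), here's a small sketch of the difference between a racy counter and an atomic one:

// Sketch: a plain int incremented from two threads can lose updates;
// std::atomic (or the OS interlocked APIs) makes the update safe.
#include <atomic>
#include <iostream>
#include <thread>

int racyCounter = 0;
std::atomic<int> atomicCounter{0};

void hammer()
{
    for (int i = 0; i < 100000; ++i)
    {
        ++racyCounter;    // read-modify-write, not atomic: updates can be lost
        ++atomicCounter;  // atomic read-modify-write
    }
}

int main()
{
    std::thread a(hammer), b(hammer);
    a.join();
    b.join();
    std::cout << "racy:   " << racyCounter   << "\n";   // often less than 200000
    std::cout << "atomic: " << atomicCounter << "\n";   // always 200000
    return 0;
}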

This topic is now closed to further replies.