Lord Method Man

String += appending ASCII value instead of char

11 posts in this topic

I've been experimenting with Visual C++ after working in Java for a while. Right now I'm doing a little string manipulation.

I have a Windows Form with a textBox that displays the value of a String. I'm trying to append a char to the String. The problem is that the ASCII value of the char is appended instead of the character itself. So far I haven't been able to find a solution in the resources I've read. Here is an example:

[CODE]
String^ mystring;
char y = 'd';

mystring = "Test";
mystring += y;
textBox->Text = mystring;
[/CODE]

The output is "Test100" instead of "Testd" (100 being the ASCII value of d). What should I be doing differently?


Maybe try using a string for "y" instead of a char?
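
Untested, but something like this sketch should do it, assuming mystring is a System::String^ as in your example (appending another String^ with += concatenates the text rather than a number):

[CODE]
String^ mystring = "Test";
String^ y = "d";     // hold the character as a one-character string instead of a char
mystring += y;       // "Testd"
[/CODE]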


I think you will need to do something like this:

mystring += "" + y;

I don't have a compiler in front of me, but you could also try casting y to a string e.g.:

mystring += (string) y;

EDIT: Whoops, just realised this is Visual C++, not C#. Not sure if either of these things will work.


It may be that the reason you're seeing the decimal value instead of the ASCII character is that char is an integer type: one just large enough (8 bits) to hold an ASCII character. Try referencing it as a pointer instead of as a single character, so it becomes a C-style string.
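
A rough, untested sketch of that idea, assuming a C++/CLI project where mystring is a System::String^ (the two-element buffer supplies the null terminator a C-style string needs):

[CODE]
char y = 'd';
char buf[2] = { y, '\0' };      // the char plus a terminator makes a C-style string
String^ mystring = "Test";
mystring += gcnew String(buf);  // String has a constructor that takes a narrow C string
[/CODE]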


Java programmer myself, so walking blindly into a room full of knives here, but is there a method like the one in Java's String class that you can use to append two strings? Then you would just turn the char into a string first.
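
An untested sketch of that idea in C++/CLI, assuming mystring is a System::String^: String::Concat plays the role of Java's concat(), and widening the char to wchar_t first is assumed to make ToString() produce "d" rather than "100".

[CODE]
char y = 'd';
wchar_t wide = y;                                        // wchar_t maps to System::Char
String^ mystring = "Test";
mystring = String::Concat(mystring, wide.ToString());    // "Testd"
[/CODE]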


Is 'String' a Microsoft-centric data type or did you mistype the name of the standard 'string' class? The string class in the standard library can append single characters, C-style strings, and standard strings using the '+=' operator.

Without knowing the data type of 'textBox->Text', I'm assuming that something like the following will work:


[CODE]
std::string mystring;
char y = 'd';

mystring = "Test";
mystring += y;
textBox->Text = mystring.c_str();
[/CODE]

Alternatively, you could forgo the niceness of the string class and use only C-style strings. Your example would then look something like this:

[CODE]
char mystring[50];
char y = 'd';

strcpy( mystring, "Test" );
strncat( mystring, &y, 1 );
textBox->Text = mystring;
[/CODE]


[CODE]
String^ mystring;
char y = 'd';
mystring = "Test";
mystring += y;
textBox->Text = mystring;
[/CODE]

This is C++/CLI. The caret "^" on a type name is not C++ syntax; it denotes a managed type, here System::String. If you want to learn C++, make sure you create an empty C++ project, not a "CLR" project. You won't be able to work directly with Winforms in standard C++; if working with Winforms is what you want, learn C# instead. I strongly doubt you want to learn C++/CLI: it is more complicated than you can imagine, and it doesn't serve much purpose beyond building bridges between the native and the managed world.

Anyway, with System::String, appending a char with the += operator won't give you the character; however, you could call ToString() on it and append that instead, i.e.

[CODE]mystring += y.ToString();[/CODE]

But really, run from C++/CLI while there is still time. File -> New -> Project -> Visual C++ -> General -> Empty Project. Now you're doing real, ISO C++. It's complicated enough by itself.
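
For what it's worth, in plain ISO C++ (console only, no Winforms) the same exercise just works, since std::string += char appends the character; a minimal console sketch:

[CODE]
#include <iostream>
#include <string>

int main()
{
    std::string mystring = "Test";
    char y = 'd';
    mystring += y;                   // appends the character, not its numeric value
    std::cout << mystring << '\n';   // prints "Testd"
}
[/CODE]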


.NET uses Unicode characters. Use wchar_t instead of char.

[CODE]wchar_t y = L'd';[/CODE]

The compiler most likely chooses int (System::Int32) as the closest conversion for char, which is why you end up with the decimal number.
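
Untested, but with that change the original example should display the character; a minimal sketch assuming the same textBox from the first post:

[CODE]
String^ mystring = "Test";
wchar_t y = L'd';          // wchar_t maps to System::Char rather than to a number
mystring += y;             // concatenates "d", not "100"
textBox->Text = mystring;  // shows "Testd"
[/CODE]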

If you really want to use char throughout your code, then you can do something similar to Dr_Asik's suggestion by using Convert::ToString:

[CODE]char y = 'd';
String ^value = Convert::ToString(y);[/CODE]

This link explains how the native types map to the CLI: http://www.c-sharpcorner.com/uploadfile/b942f9/cppcli-for-the-C-Sharp-programmer/ , which is why char ends up as a number rather than a character (it does not widen to a character; rather, it widens to an integer).


Thanks for the tips, I'll try them tomorrow. I'm working on a random password generator. It's something I try to make with each programming language I work with.


Scrapped everything and redid the project in C#. Holy crap was it easier. I was thinking I could do it in C++ since that's what I used in college (command-line programs only). That Visual C++/CLI stuff sucked. Knowing Java, I was pretty much able to code in C# without having to look up much of anything. Came up with this little class for creating a password:


[CODE]
using System;
using System.Security.Cryptography;

class SecurePassword
{
    private String password;
    private int length;
    RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
    Random random = new Random();

    public void generatePassword(bool useUpper, bool useNumber)
    {
        char ch;
        int i;
        int type;
        password = "";
        for (i = 0; i < this.length; i++)
        {
            // decide which character class to use for this position
            type = random.Next(0, 4);
            if (type == 0 && useUpper)
                ch = getRandomUpperChar();
            else if (type == 1 && useNumber)
                ch = getRandomDigit();
            else
                ch = getRandomLowerChar();

            password += ch;
        }
    }

    public char getRandomDigit()
    {
        byte[] byteCh = new byte[4];
        double range;
        uint intCh;
        rng.GetBytes(byteCh);
        intCh = BitConverter.ToUInt32(byteCh, 0);
        range = intCh / 4294967296.0;       // scale to [0, 1)
        intCh = (uint)(range * 10);         // 0..9
        return Convert.ToChar(intCh + 48);  // '0' is ASCII 48
    }

    public char getRandomLowerChar()
    {
        byte[] byteCh = new byte[4];
        double range;
        uint intCh;
        rng.GetBytes(byteCh);
        intCh = BitConverter.ToUInt32(byteCh, 0);
        range = intCh / 4294967296.0;       // scale to [0, 1)
        intCh = (uint)(range * 26);         // 0..25
        return Convert.ToChar(intCh + 97);  // 'a' is ASCII 97
    }

    public char getRandomUpperChar()
    {
        byte[] byteCh = new byte[4];
        double range;
        uint intCh;
        rng.GetBytes(byteCh);
        intCh = BitConverter.ToUInt32(byteCh, 0);
        range = intCh / 4294967296.0;       // scale to [0, 1)
        intCh = (uint)(range * 26);         // 0..25
        return Convert.ToChar(intCh + 65);  // 'A' is ASCII 65
    }

    public String getPassword()
    {
        return password;
    }

    public void setLength(int len)
    {
        length = len;
    }
}
[/CODE]

Had a little hiccup converting the byte array to a character in the range I wanted; I'm sure my solution is a bit sloppy. I wanted to use the secure random methods instead of just Random.Next(min, max).


Had a little hiccup converting the byte array to a character in the range I wanted; I'm sure my solution is a bit sloppy. I wanted to use the secure random methods instead of just Random.Next(min, max).

Note: you're still using Random.Next(...) in your main loop.

Also, you could probably simplify the code a lot by extracting the bytes-to-number conversion into a separate function:

[CODE]
private char getCharacter(uint start, uint range)
{
    byte[] bytes = new byte[1];

    // realistically, given the expected ranges (not even a full byte), you could use a single byte
    rng.GetBytes(bytes);

    return Convert.ToChar(start + bytes[0] % range);
}
[/CODE]

Any random lowercase character:

[CODE]ch = getCharacter((uint)'a', 26);[/CODE]

Any random uppercase character:

[CODE]ch = getCharacter((uint)'A', 26);[/CODE]

Any random number character:

[CODE]ch = getCharacter((uint)'0', 10);[/CODE]

