# Converting a character array of packed decimal to an integer in a C program

I need to write a C program to convert a packed decimal field in a buffer to an integer value. The precision of the packed decimal is 9 and the scale is 0. What is the best way to convert this in an IBM mainframe C program? In COBOL the format used for packed decimal is COMP-3. Any help is appreciated.

If you are running the program on a z/OS mainframe, then the C compiler supports packed decimal natively.

Googling for "z/OS C fixed-point decimal type" should get you to the right manual page. It is as simple as:

```c
#include <decimal.h>

decimal(9, 0) mynum;
```
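As a hedged sketch of how that might be used for this question's buffer: on z/OS, a `decimal(9, 0)` occupies the same five bytes as a COBOL `PIC S9(9) COMP-3` field, so you can copy the packed bytes over it and let the compiler generate the conversion. The function name `unpack` is mine, and this compiles only with the IBM XL C compiler (not standard C), so treat it as illustrative:

```
#include <decimal.h>
#include <string.h>

long long unpack(const char *buf)
{
    decimal(9, 0) d;            /* five bytes of packed-decimal storage */
    memcpy(&d, buf, sizeof d);  /* copy the packed field from the buffer */
    return (long long)d;        /* compiler generates the conversion */
}
```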

One way I think it can be done:

```c
#include <stdio.h>

long long MyGetPackedDecimalValue(const char *pdIn, int length)
{
    /* Convert packed decimal (COBOL COMP-3) to a long long */
    const int PlusSign  = 0x0C;  /* Plus sign nibble */
    const int MinusSign = 0x0D;  /* Minus sign nibble */
    const int NoSign    = 0x0F;  /* Unsigned */
    const int DropHO    = 0xFF;  /* AND mask to drop high-order sign bits */
    const int GetLO     = 0x0F;  /* Get only the low-order nibble */
    long long val = 0;           /* Value to return */

    for (int i = 0; i < length; i++)
    {
        int aByte = pdIn[i] & DropHO;  /* Next two nibbles, sign bits dropped */
        if (i == length - 1)
        {   /* Last byte: one digit plus the sign nibble */
            int digit = aByte >> 4;
            val = val * 10 + digit;
            int sign = aByte & GetLO;  /* Now get the sign */
            if (sign == MinusSign)
            {
                val = -val;
            }
            else if (sign != PlusSign && sign != NoSign)
            {
                /* Do we care if there is an invalid sign? */
                fprintf(stderr, "Invalid sign nibble in packed decimal\n");
            }
        }
        else
        {   /* Two digits per byte: high-order nibble first, then low-order */
            int digit = aByte >> 4;
            val = val * 10 + digit;
            digit = aByte & GetLO;
            val = val * 10 + digit;
        }
    }
    return val;
}
```