
Is it safe to use -1 to set all bits to true?

lottoking 2020. 7. 1. 07:40



I've seen this pattern used a lot in C and C++:

unsigned int flags = -1;  // all bits are true

Is this a good, portable way to accomplish this? Or is using 0xffffffff or ~0 better?


I recommend you do it exactly as you have shown, since it is the most straightforward way. Initializing to -1 always works, independent of the actual sign representation, while ~ sometimes has surprising behavior because you have to use the right operand type: only then do you get the highest value of an unsigned type.

For an example of a possible surprise, consider this one:

unsigned long a = ~0u;

It won't necessarily store a pattern with all bits 1 into a. It first creates a pattern with all bits 1 in an unsigned int, and then assigns it to a. What happens when unsigned long has more bits is that not all of them are 1.
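To make the surprise concrete, here is a minimal sketch (assuming an LP64 platform where unsigned int is 32 bits and unsigned long is 64 bits):

#include <iostream>

int main() {
    unsigned long a = ~0u;   // ~0u is an unsigned int: only the low 32 bits end up set
    unsigned long b = ~0ul;  // correct operand type: all 64 bits are set
    unsigned long c = -1;    // -1 converts to the maximum of any unsigned type
    std::cout << std::hex << a << "\n" << b << "\n" << c << "\n";
    // prints ffffffff, then ffffffffffffffff twice
}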

And consider this one, which fails on a non-two's-complement representation:

unsigned int a = ~0; // Should have done ~0u !

The reason is that ~0 has to flip all the bits. Flipping them yields -1 on a two's-complement machine (which is the value we need), but does not yield -1 on other representations: on a one's-complement machine it yields zero. So on a one's-complement machine, the above initializes a to zero.
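For a worked illustration, consider a hypothetical 8-bit one's-complement machine:

// 0 as an 8-bit int:   0000 0000
// ~0 flips every bit:  1111 1111
// On one's complement, 1111 1111 is "negative zero", whose value is 0,
// so "unsigned int a = ~0;" initializes a from the value 0: a == 0.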

The thing to understand is that it is all about values, not bits. The variable is initialized with a value. If, in the initializer, you modify the bits of the operand used for initialization, the value is generated according to those bits. The value you need in order to initialize a to the highest possible value is -1 or UINT_MAX. The second depends on the type: for an unsigned long a you need ULONG_MAX. The first, however, does not depend on the type, and it is a nice way of getting the highest value.

We are not talking about whether -1 has all bits set to one (it does not always). And we are of course not talking about whether ~0 has all bits set to one (it does).

What we are talking about is the result of the initialized flags variable, and for that, only -1 works with every type and on every machine.
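A minimal sketch of that type-independence (the values shown assume the usual 8/16/32/64-bit widths):

unsigned char      uc  = -1;  // UCHAR_MAX  (0xFF)
unsigned short     us  = -1;  // USHRT_MAX  (0xFFFF)
unsigned int       ui  = -1;  // UINT_MAX   (0xFFFFFFFF)
unsigned long long ull = -1;  // ULLONG_MAX (0xFFFFFFFFFFFFFFFF)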


  • unsigned int flags = -1; is portable.
  • unsigned int flags = ~0; is not portable, because it relies on a two's-complement representation.
  • unsigned int flags = 0xffffffff; is not portable, because it assumes 32-bit integers.

If you want to set all bits in a way guaranteed by the C standard, use the first one.


Frankly, I think all the f's are more readable. As to the comment that it is an anti-pattern: if you really care about all bits being set or cleared, I would argue you are probably in a situation where you care about the size of the variable anyway, which calls for a fixed-width type such as uint16_t.
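For instance, with a fixed-width type the width of the literal matches the variable by construction (a small sketch):

#include <cstdint>

uint16_t flags16 = 0xFFFF;      // exactly 16 bits, all set
uint32_t flags32 = 0xFFFFFFFF;  // exactly 32 bits, all set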


A way to avoid the problems mentioned is simply to do:

unsigned int flags = 0;
flags = ~flags;

This is portable and to the point.


I am not sure that using an unsigned int for flags is a good idea in the first place in C++. What about bitset and the like?

std::numeric_limits<unsigned int>::max() is better, because 0xffffffff assumes that unsigned int is a 32-bit integer.
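A minimal sketch of both suggestions, for reference:

#include <bitset>
#include <limits>

int main() {
    std::bitset<32> flag_bits;  // value-initialized: all bits zero
    flag_bits.set();            // no-argument set() turns every bit on

    unsigned int flags = std::numeric_limits<unsigned int>::max();  // width-correct, no magic literal
    (void)flags;
}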


unsigned int flags = -1;  // all bits are true

"이것이 이것을 달성하는 좋은 [,] 휴대용 방법입니까?"

Portable? Yes.

Good? Debatable, as evidenced by all the confusion shown in this thread. Being clear enough that your fellow programmers can understand the code without confusion should be one of the dimensions by which we measure good code.

Also, this approach is prone to compiler warnings. To eliminate the warning without crippling your compiler, you need an explicit cast. For example:

unsigned int flags = static_cast<unsigned int>(-1);

An explicit cast requires that you pay attention to the target type, and if you pay attention to the target type, you naturally avoid the pitfalls of the other approaches.

My advice is to pay attention to the target type and make sure there are no implicit conversions. For example:

unsigned int flags1 = UINT_MAX;
unsigned int flags2 = ~static_cast<unsigned int>(0);
unsigned long flags3 = ULONG_MAX;
unsigned long flags4 = ~static_cast<unsigned long>(0);

All of these are correct and obvious to your fellow programmers.

And with C++11's auto, you can make each of them even simpler:

auto flags1 = UINT_MAX;
auto flags2 = ~static_cast<unsigned int>(0);
auto flags3 = ULONG_MAX;
auto flags4 = ~static_cast<unsigned long>(0);

I consider "correct and obvious" better than "simply correct".


Converting -1 into any unsigned type is guaranteed by the standard to result in all-ones. Use of ~0U is generally bad since 0 has type unsigned int and will not fill all the bits of a larger unsigned type, unless you explicitly write something like ~0ULL. On sane systems, ~0 should be identical to -1, but since the standard allows ones-complement and sign/magnitude representations, strictly speaking it's not portable.

Of course it's always okay to write out 0xffffffff if you know you need exactly 32 bits, but -1 has the advantage that it will work in any context even when you do not know the size of the type, such as macros that work on multiple types, or if the size of the type varies by implementation. If you do know the type, another safe way to get all-ones is the limit macros UINT_MAX, ULONG_MAX, ULLONG_MAX, etc.
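For illustration, here is a sketch of such a width-agnostic macro (the name ALL_BITS is made up for this example):

#include <cstdint>

#define ALL_BITS (-1)  // converts to the maximum of whatever unsigned type receives it

uint16_t mask16 = ALL_BITS;  // 0xFFFF
uint64_t mask64 = ALL_BITS;  // 0xFFFFFFFFFFFFFFFF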

Personally I always use -1. It always works and you don't have to think about it.


As long as you have #include <limits.h> as one of your includes, you should just use

unsigned int flags = UINT_MAX;

If you want a long's worth of bits, you could use

unsigned long flags = ULONG_MAX;

These values are guaranteed to have all the value bits of the result set to 1, regardless of how signed integers are implemented.


Yes. As mentioned in other answers, -1 is the most portable; however, it is not very semantic and triggers compiler warnings.

To solve these issues, try this simple helper:

#include <cstdint>       // fixed-width types used in the example below
#include <type_traits>   // std::is_unsigned

// Converts implicitly to any unsigned type, producing that type's maximum
// value, i.e. all bits set.
static const struct All1s
{
    template<typename UnsignedType>
    inline operator UnsignedType(void) const
    {
        static_assert(std::is_unsigned<UnsignedType>::value,
                      "This is designed only for unsigned types");
        return static_cast<UnsignedType>(-1);
    }
} ALL_BITS_TRUE;

Usage:

unsigned a = ALL_BITS_TRUE;
uint8_t  b = ALL_BITS_TRUE;
uint16_t c = ALL_BITS_TRUE;
uint32_t d = ALL_BITS_TRUE;
uint64_t e = ALL_BITS_TRUE;

I would not do the -1 thing. It's rather non-intuitive (to me at least). Assigning signed data to an unsigned variable just seems to be a violation of the natural order of things.

In your situation, I always use 0xFFFF. (Use the right number of Fs for the variable size of course.)

[BTW, I very rarely see the -1 trick done in real-world code.]

Additionally, if you really care about the individual bits in a variable, it would be a good idea to start using the fixed-width uint8_t, uint16_t, uint32_t types.


On Intel's IA-32 processors it is OK to write 0xFFFFFFFF to a 64-bit register and get the expected results. This is because IA32e (the 64-bit extension to IA32) only supports 32-bit immediates. In 64-bit instructions 32-bit immediates are sign-extended to 64-bits.

The following is illegal:

mov rax, 0ffffffffffffffffh

The following puts 64 1s in RAX:

mov rax, 0ffffffffh

Just for completeness, the following puts 32 1s in the lower part of RAX (aka EAX):

mov eax, 0ffffffffh

And in fact I've had programs fail when I wanted to write 0xffffffff to a 64-bit variable and I got a 0xffffffffffffffff instead. In C this would be:

uint64_t x;
x = UINT64_C(0xffffffff);
printf("x is %" PRIx64 "\n", x);

the result is:

x is 0xffffffffffffffff

I thought to post this as a comment to all the answers that said that 0xFFFFFFFF assumes 32 bits, but so many people answered it I figured I'd add it as a separate answer.


See litb's answer for a very clear explanation of the issues.

My disagreement is that, very strictly speaking, there are no guarantees for either case. I don't know of any architecture that does not represent an unsigned value of 'one less than two to the power of the number of bits' as all bits set, but here is what the Standard actually says (3.9.1/7 plus note 44):

The representations of integral types shall define values by use of a pure binary numeration system. [Note 44:]A positional representation for integers that uses the binary digits 0 and 1, in which the values represented by successive bits are additive, begin with 1, and are multiplied by successive integral power of 2, except perhaps for the bit with the highest position.

That leaves the possibility for one of the bits to be anything at all.


Although the 0xFFFF (or 0xFFFFFFFF, etc.) may be easier to read, it can break portability in code which would otherwise be portable. Consider, for example, a library routine to count how many items in a data structure have certain bits set (the exact bits being specified by the caller). The routine may be totally agnostic as to what the bits represent, but still need to have an "all bits set" constant. In such a case, -1 will be vastly better than a hex constant since it will work with any bit size.

The other possibility, if a typedef value is used for the bitmask, would be to use ~(bitMaskType)0; if bitmask happens to only be a 16-bit type, that expression will only have 16 bits set (even if 'int' would otherwise be 32 bits) but since 16 bits will be all that are required, things should be fine provided that one actually uses the appropriate type in the typecast.

Incidentally, expressions of the form longvar &= ~[hex_constant] have a nasty gotcha if the hex constant is too large to fit in an int, but will fit in an unsigned int. If an int is 16 bits, then longvar &= ~0x4000; or longvar &= ~0x10000; will clear one bit of longvar, but longvar &= ~0x8000; will clear out bit 15 and all bits above that. Values which fit in int will have the complement operator applied to a type int, but the result will be sign extended to long, setting the upper bits. Values which are too big for unsigned int will have the complement operator applied to type long. Values which are between those sizes, however, will apply the complement operator to type unsigned int, which will then be converted to type long without sign extension.
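A sketch of that gotcha, assuming a 16-bit int and a 32-bit long (as on a classic 16-bit compiler):

long longvar = 0x7FFFFFFF;

longvar &= ~0x4000;   // 0x4000 is an int; ~0x4000 sign-extends to 0xFFFFBFFF: clears only bit 14
longvar &= ~0x10000;  // 0x10000 does not fit in 16 bits, so it is a long: clears only bit 16
longvar &= ~0x8000;   // 0x8000 is an unsigned int; ~0x8000 is 0x7FFF, which zero-extends to
                      // 0x00007FFF: bits 15 through 31 are all cleared!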


Practically: Yes

Theoretically: No.

-1 = 0xFFFFFFFF (or whatever size an int is on your platform) is only true with two's complement arithmetic. In practice, it will work, but there are legacy machines out there (IBM mainframes, etc.) where you've got an actual sign bit rather than a two's complement representation. Your proposed ~0 solution should work everywhere.


As others have mentioned, -1 is the correct way to create an integer that will convert to an unsigned type with all bits set to 1. However, the most important thing in C++ is using correct types. Therefore, the correct answer to your problem (which includes the answer to the question you asked) is this:

std::bitset<32> const flags(-1);

This will always contain the exact amount of bits you need. It constructs a std::bitset with all bits set to 1 for the same reasons mentioned in other answers.


It is certainly safe, as -1 will always have all available bits set, but I like ~0 better. -1 just doesn't make much sense for an unsigned int. 0xFF... is not good because it depends on the width of the type.


I say:

#include <string.h>

int x;
memset(&x, 0xFF, sizeof(int));

This will always give you the desired result.


Leveraging the fact that assigning all bits set to an unsigned type is equivalent to taking the maximum possible value for the given type,
and extending the scope of the question to all unsigned integer types:

Assigning -1 works for any unsigned integer type (unsigned int, uint8_t, uint16_t, etc.) for both C and C++.

As an alternative, for C++, you can either:

  1. Include <limits> and use std::numeric_limits<your_type>::max()
  2. Write a custom templated function (this also allows a sanity check, i.e. that the destination type really is an unsigned type; see the sketch below)

The purpose would be to add more clarity, as assigning -1 always needs some explanatory comment.
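A minimal sketch of option 2 (the function name all_bits_set is hypothetical):

#include <type_traits>

template <typename T>
T all_bits_set()
{
    static_assert(std::is_unsigned<T>::value, "only meaningful for unsigned types");
    return static_cast<T>(-1);  // conversion to unsigned yields the type's maximum value
}

// Usage: auto flags = all_bits_set<unsigned int>();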


A way to make the meaning a bit more obvious, and yet avoid repeating the type:

const auto flags = static_cast<unsigned int>(-1);

Yes, the representation shown is correct. If we did it the other way round, you would need an operator to reverse all the bits; in this case the logic is quite straightforward if we consider the size of the integers on the machine.

For instance, on a machine where an integer is 2 bytes = 16 bits, the maximum value it can hold is 2^16 - 1 = 65535 (since 2^16 = 65536).

Then 0 % 65536 = 0 and -1 % 65536 = 65535, which corresponds to 1111...1 with all bits set to 1 (if we consider residue classes mod 65536). So it is quite straightforward: this notion works out perfectly fine for unsigned ints.

Just check the following program fragment:

#include <cmath>
#include <cstdio>
#include <iostream>

int main() {
    unsigned int a = 2;

    // 2^32 = 4294967296 does not fit in a 32-bit unsigned int,
    // so go through a 64-bit type to print the modulus itself
    std::cout << (unsigned long long)std::pow(double(a), double(sizeof(a) * 8));

    unsigned int b = -1;  // wraps to 2^32 - 1 = 4294967295 on 4-byte ints
    std::cout << "\n" << b;

    std::getchar();
    return 0;
}

The answer for b is 4294967295, which is -1 mod 2^32 on 4-byte integers. Hence it is perfectly valid for unsigned integers.

In case of any discrepancies, please report them.

Reference URL: https://stackoverflow.com/questions/809227/is-it-safe-to-use-1-to-set-all-bits-to-true
