x86: replace explicit REP_RETs with RETs

From x86inc:
> On AMD cpus <=K10, an ordinary ret is slow if it immediately follows either
> a branch or a branch target. So switch to a 2-byte form of ret in that case.
> We can automatically detect "follows a branch", but not a branch target.
> (SSSE3 is a sufficient condition to know that your cpu doesn't have this problem.)
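
For reference, here is what the two encodings look like in a hypothetical
NASM loop tail (not code from this patch): "rep ret" assembles to F3 C3,
and the F3 prefix is simply ignored on a bare ret, so it behaves exactly
like the 1-byte C3 form while sidestepping the stall described above.

    .loop:
        dec     ecx
        jnz     .loop
        rep ret         ; F3 C3: 2-byte ret, safe directly after the jnz
                        ; a plain ret (C3) here would stall on AMD <=K10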

x86inc can automatically determine whether to emit a rep ret rather than
a plain ret in most of these cases, so the impact is minimal.
Additionally, a few REP_RETs were used unnecessarily, even though the
return was nowhere near a branch.
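
Roughly, the detection works as sketched below. This is a condensed
paraphrase of x86inc.asm, not the verbatim code: the real file wraps
every branch mnemonic (jz, jne, jl, jmp, call, ...) through a single
BRANCH_INSTR macro, and only applies the workaround when
notcpuflag(ssse3).

    %define last_branch_adr $$      ; start at an address no ret will match

    %macro jl 1                     ; wrapper shadowing the branch mnemonic
        jl %1                       ; emits the real instruction (NASM
                                    ; macros do not expand recursively)
        %%branch_end equ $          ; record the address just past the branch
        %xdefine last_branch_adr %%branch_end
    %endmacro

    %macro RET 0
        ; (last_branch_adr-$) is 0 iff this ret immediately follows a branch:
        ; 0>>31 -> 0 -> "times 1 rep"; negative>>31 -> -1 -> "times 0 rep"
        times ((last_branch_adr-$)>>31)+1 rep
        ret
    %endmacro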

The only CPUs affected were AMD K10s, made between 2007 and 2011
(16 and 12 years ago, respectively).

In the future, everyone involved with x86inc should consider dropping
REP_RETs altogether.
Author: Lynne
Date: 2023-02-01 02:26:20 +01:00
Commit: bbe95f7353
Parent: fc9a3b584d
61 changed files with 223 additions and 223 deletions


@@ -85,7 +85,7 @@ pack_2ch_%2_to_%1_u_int %+ SUFFIX:
     add lenq, 2*mmsize/(2<<%4)
 %endif
     jl .next
-    REP_RET
+    RET
 %endmacro
 %macro UNPACK_2CH 5-7
@@ -157,7 +157,7 @@ unpack_2ch_%2_to_%1_u_int %+ SUFFIX:
     add lenq, mmsize/(1<<%4)
 %endif
     jl .next
-    REP_RET
+    RET
 %endmacro
 %macro CONV 5-7
@@ -198,7 +198,7 @@ cglobal %2_to_%1_%3, 3, 3, 6, dst, src, len
     emms
     RET
 %else
-    REP_RET
+    RET
 %endif
 %endmacro
@@ -301,7 +301,7 @@ pack_6ch_%2_to_%1_u_int %+ SUFFIX:
     emms
     RET
 %else
-    REP_RET
+    RET
 %endif
 %endmacro
@@ -375,7 +375,7 @@ unpack_6ch_%2_to_%1_u_int %+ SUFFIX:
     add dstq, mmsize
     sub lend, mmsize/4
     jg .loop
-    REP_RET
+    RET
 %endmacro
 %define PACK_8CH_GPRS (10 * ARCH_X86_64) + ((6 + HAVE_ALIGNED_STACK) * ARCH_X86_32)
@@ -525,7 +525,7 @@ pack_8ch_%2_to_%1_u_int %+ SUFFIX:
 %endif
     sub lend, mmsize/4
     jg .loop
-    REP_RET
+    RET
 %endmacro
 %macro INT16_TO_INT32_N 6


@@ -68,7 +68,7 @@ mix_2_1_float_u_int %+ SUFFIX:
     mov%1 [outq + lenq + mmsize], m2
     add lenq, mmsize*2
     jl .next
-    REP_RET
+    RET
 %endmacro
 %macro MIX1_FLT 1
@@ -100,7 +100,7 @@ mix_1_1_float_u_int %+ SUFFIX:
     mov%1 [outq + lenq + mmsize], m1
     add lenq, mmsize*2
     jl .next
-    REP_RET
+    RET
 %endmacro
 %macro MIX1_INT16 1
@@ -152,7 +152,7 @@ mix_1_1_int16_u_int %+ SUFFIX:
     emms
     RET
 %else
-    REP_RET
+    RET
 %endif
 %endmacro
@@ -218,7 +218,7 @@ mix_2_1_int16_u_int %+ SUFFIX:
     emms
     RET
 %else
-    REP_RET
+    RET
 %endif
 %endmacro