On Mon, Jul 10, 2017 at 3:41 PM, Tom Lendacky <thomas.lendacky@xxxxxxx> wrote:
On 7/8/2017 7:50 AM, Brian Gerst wrote:
On Fri, Jul 7, 2017 at 9:38 AM, Tom Lendacky <thomas.lendacky@xxxxxxx> wrote:
Update the CPU features to include identifying and reporting on the
Secure Memory Encryption (SME) feature. SME is identified by CPUID
0x8000001f, but requires BIOS support to enable it (set bit 23 of
MSR_K8_SYSCFG). Only show the SME feature as available if reported by
CPUID and enabled by BIOS.
Reviewed-by: Borislav Petkov <bp@xxxxxxx>
Signed-off-by: Tom Lendacky <thomas.lendacky@xxxxxxx>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/msr-index.h | 2 ++
arch/x86/kernel/cpu/amd.c | 13 +++++++++++++
arch/x86/kernel/cpu/scattered.c | 1 +
4 files changed, 17 insertions(+)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 2701e5f..2b692df 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -196,6 +196,7 @@
#define X86_FEATURE_HW_PSTATE ( 7*32+ 8) /* AMD HW-PState */
#define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */
+#define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */
Given that this feature is available only in long mode, this should be
added to disabled-features.h as disabled for 32-bit builds.
I can add that. If the series needs a re-spin, I'll include this change in
the series; otherwise, I can send a follow-on patch to handle the feature
for 32-bit builds, if that works.
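Roughly what I have in mind for disabled-features.h (untested sketch; this
assumes the bit stays in word 7, so it folds into the existing
DISABLED_MASK7):

#ifdef CONFIG_X86_64
# define DISABLE_SME	0
#else
# define DISABLE_SME	(1 << (X86_FEATURE_SME & 31))
#endif

#define DISABLED_MASK7	(DISABLE_SME)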
#define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */
#define X86_FEATURE_INTEL_PT ( 7*32+15) /* Intel Processor Trace */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 18b1623..460ac01 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -352,6 +352,8 @@
#define MSR_K8_TOP_MEM1 0xc001001a
#define MSR_K8_TOP_MEM2 0xc001001d
#define MSR_K8_SYSCFG 0xc0010010
+#define MSR_K8_SYSCFG_MEM_ENCRYPT_BIT 23
+#define MSR_K8_SYSCFG_MEM_ENCRYPT BIT_ULL(MSR_K8_SYSCFG_MEM_ENCRYPT_BIT)
#define MSR_K8_INT_PENDING_MSG 0xc0010055
/* C1E active bits in int pending message */
#define K8_INTP_C1E_ACTIVE_MASK 0x18000000
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index bb5abe8..c47ceee 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -611,6 +611,19 @@ static void early_init_amd(struct cpuinfo_x86 *c)
*/
if (cpu_has_amd_erratum(c, amd_erratum_400))
set_cpu_bug(c, X86_BUG_AMD_E400);
+
+ /*
+ * BIOS support is required for SME. If BIOS has not enabled SME
+ * then don't advertise the feature (set in scattered.c)
+ */
+ if (cpu_has(c, X86_FEATURE_SME)) {
+ u64 msr;
+
+ /* Check if SME is enabled */
+ rdmsrl(MSR_K8_SYSCFG, msr);
+ if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT))
+ clear_cpu_cap(c, X86_FEATURE_SME);
+ }
This should be conditional on CONFIG_X86_64.
If I make the scattered feature support conditional on CONFIG_X86_64
(based on the comment below), then cpu_has() will always be false unless
CONFIG_X86_64 is enabled, so this won't need to be wrapped in an #ifdef.
If you change it to use cpu_feature_enabled(), gcc will see that it is
disabled and eliminate the dead code at compile time.
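I.e. something like this (untested sketch; same check as in the patch, just
using cpu_feature_enabled() so the whole block can be discarded when the
feature is compile-time disabled):

	if (cpu_feature_enabled(X86_FEATURE_SME)) {
		u64 msr;

		/* Check whether the BIOS enabled memory encryption */
		rdmsrl(MSR_K8_SYSCFG, msr);
		if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT))
			clear_cpu_cap(c, X86_FEATURE_SME);
	}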
}
static void init_amd_k8(struct cpuinfo_x86 *c)
diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
index 23c2350..05459ad 100644
--- a/arch/x86/kernel/cpu/scattered.c
+++ b/arch/x86/kernel/cpu/scattered.c
@@ -31,6 +31,7 @@ struct cpuid_bit {
{ X86_FEATURE_HW_PSTATE, CPUID_EDX, 7, 0x80000007, 0 },
{ X86_FEATURE_CPB, CPUID_EDX, 9, 0x80000007, 0 },
{ X86_FEATURE_PROC_FEEDBACK, CPUID_EDX, 11, 0x80000007, 0 },
+ { X86_FEATURE_SME, CPUID_EAX, 0, 0x8000001f, 0 },
This should also be conditional. We don't want to set this feature on
32-bit, even if the processor has support.
Can do. See comment above about re-spin vs. follow-on patch.
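If the #ifdef route turns out to be needed instead of (or in addition to)
the disabled-features.h change, the table entry could simply be wrapped,
e.g. (sketch only):

#ifdef CONFIG_X86_64
	{ X86_FEATURE_SME,	CPUID_EAX,  0, 0x8000001f, 0 },
#endif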
Thanks,
Tom
A follow-up patch will be OK, provided there is no code that will get
confused by the SME bit being present but not active.
--
Brian Gerst