Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
Examiner notes that applicant has specifically defined “vintage” to be limited as follows throughout prosecution:
Specification par. 12: “As used herein "vintage" refers to descriptive information identifying a component of a computing system, such as one or more of wafer material, manufacturing process node, manufacturing process version information, manufacturing bin, manufacturing lot, date of manufacture, time of manufacture, component type, product type, product version, date code of manufacturing, information regarding whether any processing cores are de-featured or all processing cores are enabled, etc. Components of the same vintage have common characteristics and may have similar DPM rates.”
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim(s) recite(s) mental processes – concepts performed in the human mind.
Subject Matter Eligibility Analysis
Step 1: Do the Claims Specify a Statutory Category?
Claims 1-5 recite an apparatus, claims 6-17 recite a method, and claims 18-22 recite a non-transitory machine-readable storage comprising instructions, therefore satisfying Step 1 of the analysis.
Step 2 Analysis
Regarding claim 1,
Step 2A – Prong 1: Is a Judicial Exception Recited?
For step 2A eligibility prong one (does the claim recite a judicial exception?), the claim(s) recite(s) “detect a scan interrupt” (this is a mental process of observation, evaluation, judgment, opinion [MPEP 2106.04(a)(2) III. “mental processes”]), “get … based at least in part on the hashed id” (this is a mental process of observation, evaluation, judgment, opinion [MPEP 2106.04(a)(2) III. “mental processes”]), and “initiate … based at least in part on the material vintage information to detect any defects in the component” (this is a mental process of observation, evaluation, judgment, opinion [MPEP 2106.04(a)(2) III. “mental processes”]). As claimed, this process can practically be performed either in the human mind or using a computer as a tool.
Even if the limitations require a computer, it can still be a mental process [see MPEP 2106.04(a)(2) III. C. "A Claim That Requires a Computer May Still Recite a Mental Process"]. Detecting a scan interrupt, determining which material vintage information of the component to get based on the hashed id, and initiating a process based at least in part on the material vintage information to detect any defects in the component are directed to mental processes of observations, evaluations, judgments, and opinions, because the steps are recited at a high level of generality and merely use computers as a tool to perform the processes.
Step 2A – Prong 2: Is the Judicial Exception Integrated into a Practical Application?
For step 2A eligibility prong two (does the claim recite additional elements that integrate the judicial exception into a practical application?), this judicial exception is not integrated into a practical application because the additional limitations of “store instructions and a plurality of test patterns;”, “store a hashed identifier (ID) of a component of a computing system and an indication of success or failure of a validation operation”, “read the scan register”, and “get material vintage information of the component” are insignificant extra-solution activities of data gathering, data sending, and storage [see MPEP 2106.05(g): “Whether the limitation amounts to necessary data gathering and outputting.” This is considered in Step 2A Prong Two and Step 2B.]
The additional computer parts (processor, memory, scan register, component of a computing system) are generic components recited at a high level of generality [see MPEP 2106.05(b) “If applicant amends a claim to add a generic computer or generic computer components and asserts that the claim recites significantly more because the generic computer is 'specially programmed' (as in Alappat, now considered superseded) or is a 'particular machine' (as in Bilski), the examiner should look at whether the added elements integrate the exception into a practical application or provide significantly more than the judicial exception. Merely adding a generic computer, generic computer components, or a programmed computer to perform generic computer functions does not automatically overcome an eligibility rejection. Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 223-24, 110 USPQ2d 1976, 1983-84 (2014). See In re Alappat, 33 F.3d 1526, 1545, 31 USPQ2d 1545, 1558 (Fed. Cir. 1994); In re Bilski, 545 F.3d 943, 88 USPQ2d 1385 (Fed. Cir. 2008)”]. As a whole, the claims are directed to several abstract mental processes implemented on a generic computer, but are not integrated into a practical application [see MPEP 2106.05(f) “implementing an abstract idea on a generic computer, does not integrate the abstract idea into a practical application in Step 2A Prong Two”].
The claim’s “scan of the component” does not integrate the judicial exception into a practical application. The limitation is specified at a high level of generality and does not meaningfully limit the claim by going beyond generally linking the use of the judicial exception to a particular technological environment. The claims generally link the abstract idea to the field of scanning a component of a computing system. As presently claimed, both the component and the computing system are generic. The same process, except for the descriptors, would also work for scanning components of a pacemaker, components of a car, components of a distributed cloud computing cluster system, components of a single system-on-chip, devices in an internet of things system, or security/temperature/elevator sensors in a building management system. [See MPEP 2106.04(d)(1) “Evaluating Improvements in the Functioning of a Computer, or an Improvement to Any Other Technology or Technical Field in Step 2A Prong Two” and MPEP 2106.05(h) “Field of Use and Technological Environment”]
Step 2B: Do the Claims Provide an Inventive Concept?
For step 2B eligibility (whether a claim amounts to significantly more), the claim(s) do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements either gather/store data (“store instructions and a plurality of test patterns;”, “store a hashed identifier (ID) of a component of a computing system and an indication of success or failure of a validation operation”, “read the scan register”, and “get material vintage information of the component”), are generic, well-known computer parts recited at a high level of generality (processor, memory, scan register, “component of a computing system”), or do not go beyond generally linking the judicial exception to a particular environment or field of use (“scan of the component”).
The data gathering/storing limitations are insignificant extra-solution activity because these limitations amount to necessary data gathering and outputting (i.e., all uses of the recited judicial exception require such data gathering or data output) [see MPEP 2106.05(g): “(1) Whether the extra-solution limitation is well known.”, “(2) Whether the limitation is significant (i.e. it imposes meaningful limits on the claim such that it is not nominally or tangentially related to the invention).”, “(3) Whether the limitation amounts to necessary data gathering and outputting, (i.e., all uses of the recited judicial exception require such data gathering or data output).”]
The additional computer parts (processor, memory, scan register, component of a computing system) are generic components recited at a high level of generality [see MPEP 2106.05(b) “If applicant amends a claim to add a generic computer or generic computer components and asserts that the claim recites significantly more because the generic computer is 'specially programmed' (as in Alappat, now considered superseded) or is a 'particular machine' (as in Bilski), the examiner should look at whether the added elements integrate the exception into a practical application or provide significantly more than the judicial exception. Merely adding a generic computer, generic computer components, or a programmed computer to perform generic computer functions does not automatically overcome an eligibility rejection. Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 223-24, 110 USPQ2d 1976, 1983-84 (2014). See In re Alappat, 33 F.3d 1526, 1545, 31 USPQ2d 1545, 1558 (Fed. Cir. 1994); In re Bilski, 545 F.3d 943, 88 USPQ2d 1385 (Fed. Cir. 2008)”]. As a whole, the claims are directed to several abstract mental processes implemented on a generic computer, but are not integrated into a practical application [see MPEP 2106.05(f) “implementing an abstract idea on a generic computer, does not integrate the abstract idea into a practical application in Step 2A Prong Two”].
The claim’s “scan of the component” does not amount to significantly more than the judicial exception. The limitation is specified at a high level of generality and does not meaningfully limit the claim by going beyond generally linking the use of the judicial exception to a particular technological environment. The claims generally link the abstract idea to the field of scanning a component of a computing system. As presently claimed, both the component and the computing system are generic. The same process, except for the descriptors, would also work for scanning components of a pacemaker, components of a car, components of a distributed cloud computing cluster system, components of a single system-on-chip, devices in an internet of things system, or security/temperature/elevator sensors in a building management system. [See MPEP 2106.05(h) “Field of Use and Technological Environment”]
Combined and considered as a whole, the claim describes a system that detects a scan interrupt (a flag to stop normal processing and start a scan), reads the scan register for a hashed ID identifying the component, retrieves more descriptive information about the component by looking up the hashed ID in a database, and initiates a scan of the component based on the looked-up information. The claim as a whole takes the judicial exception and only adds data gathering/storing steps [MPEP 2106.05(g)], generic computer parts [MPEP 2106.05(b)], and other limitations that do not go beyond generally linking the judicial exception to a particular environment or field of use [MPEP 2106.05(h)]; these additions do not amount to significantly more than the judicial exception itself [see MPEP 2106.05].
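For illustration only, the flow characterized above can be sketched as follows. This sketch is not part of the claims or the cited references; every name, the use of a dictionary as the "database," and the choice of SHA-256 are assumptions made purely for illustration.

```python
import hashlib

# Hypothetical in-memory stand-in for the database of material vintage
# information, keyed by a hashed component ID (illustrative names only).
VINTAGE_DB = {}

def register_component(component_id, vintage):
    """Store vintage information under a hashed ID (SHA-256 assumed)."""
    hashed_id = hashlib.sha256(component_id.encode()).hexdigest()
    VINTAGE_DB[hashed_id] = vintage
    return hashed_id

def on_scan_interrupt(scan_register):
    """On a scan interrupt: read the register, look up the vintage
    information by hashed ID, and 'initiate' a scan based on it."""
    hashed_id = scan_register["hashed_id"]
    vintage = VINTAGE_DB[hashed_id]  # lookup based on the hashed ID
    return {"scan_started": True, "vintage": vintage}
```

The sketch only makes concrete the sequence the examiner describes: interrupt, register read, hashed-ID lookup, scan initiation.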
Conclusion: In light of the above, the limitations in claim 1 recite and are directed to an abstract idea and recite no additional elements that would amount to significantly more than the identified abstract idea. Claim 1 is therefore not patent eligible.
Regarding claims 6 and 18, they are, respectively, the method and the machine-readable storage medium comprising instructions that the system of claim 1 implements, and they are rejected for the same reasons.
As for the limitations recited in claims 2-5, 7-17, and 19-22, when considering each of the claims as a whole, these additional elements do not integrate the exception into a practical application, using one or more of the considerations laid out by the Supreme Court and the Federal Circuit. The additional elements do not reflect an improvement in the functioning of a computer, or an improvement to another technology or technical field. The additional elements do not implement a judicial exception with, or use a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim. The additional elements do not apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-9, 14-15, 17-20, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over US 20230015477 A1 (Verburg) in view of US 20170249229 A1 (Walton) and US 20190033367 A1 (Pappu).
Regarding claim 1, Verburg teaches,
An apparatus comprising:
a memory to store instructions(fig 7:703; par 58 “The processors 701, also referred to as processing circuits, are coupled via a system bus 702 to a system memory 703 and various other components.”) and a plurality of test patterns(fig 1:104; par 19 “The system 100 also includes a test case database 104 that includes a variety of test cases that can be selected by the test controller 102 for execution on the system under test 120.”); and
a processor (fig 7:701; par 58 “As shown in FIG. 7, the computer system 700 has one or more central processing units (CPU(s)) 701a, 701b, 701c, etc. (collectively or generically referred to as processor(s) 701).”), including a scan register to store identifying information (ID) of a component of a computing system (par 22 “The test controller 102 can obtain component data for each of these specific components to determine what test cases to run. This component data can include manufacturer vintage information for parts, failure rates, failure modes as well as the manufacturing history for similar parts including test times, number of failures in the test steps and/or include development test results for each component.”; This shows that the components are identifiable. Par 27 “The learning test case 302 can be selected based on the system configuration 101 of the system under test and the component data 306 for the system, using information from the test case database 104 and the knowledge base 140, also referenced in FIG. 1, to select tests that may be likely to detect any potential defects in the SUT.” This is the scan register storing component identifying information.), coupled to the memory (fig 7:703; par 58 “The processors 701, …, are coupled … to a system memory 703 ….”), to execute the instructions to (par 60 “The mass storage 710 is an example of a tangible storage medium readable by the processors 701, where the software 711 is stored as instructions for execution by the processors 701 to cause the computer system 700 to operate, such as is described herein…”)
detect a scan interrupt;(par 19 “The system under test 120 includes a variety of components 106a-106N that are built to order based primarily on customer configuration requirements for the system 120. The components 106a-106N can include, but are not limited to, processor cores, memory, storage devices, battery modules, cooling systems, I/O devices, and the like. Each of these components 106a-106N are selected based on the customer need and customer pricing requirements. The configuration for this system, 101, is provided to the test controller 102 so a dynamic test decision can be made.”)
read the scan register; (par 19 “Each of these components 106a-106N are selected based on the customer need and customer pricing requirements. The configuration for this system, 101, is provided to the test controller 102 so a dynamic test decision can be made. The system 100 also includes a knowledge base 140 that includes data about each of the components 106a-106N of the system under test 120 as well as historical data related to previous test cases, failure rates, failure modes, error types, and the like.”)
get material vintage information of the component based at least in part on the identifying information (ID); (par 20 “When the mainframe is built and sent for testing, the test controller 102 can obtain data associated with the components 106a-106N in the mainframe from the knowledge base 140.”; par 21 “In one or more embodiments of the invention, the component data taken from the knowledge base includes manufacturer data for each component 106a-106N. The manufacturer data includes historical failure rates for the individual components based on historical tests from the manufacturer and also include the vintage of the component which includes version number, manufacturing year, failure rates/modes, etc.”) and
initiate a scan of the component based at least in part on the material vintage information to detect any defects in the component.(fig 4:410; par 30 “And at block 410, the method 400 includes executing the test environment on the first system.”. par 20 “When the mainframe is built and sent for testing, the test controller 102 can obtain data associated with the components 106a-106N in the mainframe from the knowledge base 140. Based on this data, the test controller 102 can select a set of test cases from the test case database 104 and/or built one or more custom test cases for execution.”)
However, although Verburg teaches obtaining component data (par 22), Verburg does not specifically teach using a hashed identifier.
On the other hand, Walton teaches,
An apparatus comprising:
a memory to store instructions and a plurality of test patterns(fig 3:302,(4); par 30 “The storage 301 may be any suitable storage to store information related to hardware component analysis rules, such as a local or global storage. Block 302 shows hardware component identifiers stored. In one implementation, the storage 301 also stores version numbers associated with the hardware components analysis rules.”; par 32 “An identifier or version not stored in the storage 301 may indicate that new analysis rules should be retrieved. The rules update engine 302 may send a request to the field replaceable unit 300 for the updated analysis rules. The field replaceable unit 300 may respond with its current stored analysis rules. The rules analysis engine 303 may store the received rules in the storage 302.” Walton calls their test patterns “analysis rules”); and
a processor, including a scan register to store a hashed identifier (ID) of a component of a computing system (fig 3:302; par 30 “Block 302 shows hardware component identifiers stored.”; par 19 “The identifier associated with the hardware component may be any suitable identifier. For example, the identifier may be a Globally Unique ID (GUID). The identifier may be a device ID and/or vendor ID. The identifiers may be used to determine if information is stored in the storage related to the hardware component.”; par 25 “For example, the identifier may be the application name or a hash from the binary image of the application.” Walton teaches that the identifier may be any suitable identifier, and suggests a globally unique ID, the device ID, the vendor ID, and hashing the application name. It would be obvious to one of ordinary skill in the art to also try hashing the component identifier as Walton does with the application name.), coupled to the memory, to execute the instructions to
detect a scan interrupt;(fig 2:200; par 18 “In one implementation, the processor checks information related to the layout of the computing system and periodically queries the hardware components to receive identification information used to determine if the hardware component has been changed.”; par 19 “The processor may receive an identifier associated with a hardware component, such as based on a request from the processor or based on part of an initialization process associated with the hardware component.”)
query hardware components for information;(fig 2:200; par 18 “In one implementation, the processor checks information related to the layout of the computing system and periodically queries the hardware components to receive identification information used to determine if the hardware component has been changed.”;)
get material vintage information of the component based at least in part on the hashed ID; (par 19 “The identifier associated with the hardware component may be any suitable identifier. For example, the identifier may be a Globally Unique ID (GUID). The identifier may be a device ID and/or vendor ID. The identifiers may be used to determine if information is stored in the storage related to the hardware component.” Vendor ID is part of material vintage information as it identifies the vendor.) and
initiate a scan of the component based at least in part on the material vintage information to detect any defects in the component.(fig 2:203; par 28 “Continuing to 203, the processor analyzes the computing system performance based on the received analysis rule associated with the hardware component and a workload to be executed on the computing system. For example, the rules analysis engine may use stored analysis rules updated by the processor to make predictions based on a comparison of analysis rules related to components, such as both hardware and software components, and expected or actual workloads running on the computing system.”)
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify Verburg to incorporate the hashed identifier of Walton. One of ordinary skill in the art would have been motivated to remedy the shortcomings of Verburg -- a need for how to identify components and generate updated tests when components are updated/replaced (Walton par 7 “A large scale computer system, such as a computer system associated with a data center or a server within a data center, may include many components, both hardware and software. Large scale computer systems may include hardware components that are field replaceable units that may be replaced or added to fix problems, upgrade capabilities, or add capacity.”) -- with Walton providing a known method to solve a similar problem. Walton provides “In one implementation, analysis rules are determined for workloads and components separately such that the hardware component rules may be updated and analysis performed on a workload as compared to the updated hardware component rules. The computing system may determine analysis rules associated with hardware components by querying the hardware components and receiving a response from the hardware components about analysis rules to associate with them for system analysis.” (Walton par 8)
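As a purely illustrative sketch of the hashing step in the proposed combination (the function name, the input format, and the choice of SHA-256 are assumptions; Walton does not specify a hash algorithm):

```python
import hashlib

def hash_component_id(device_id, vendor_id):
    # Walton teaches the identifier "may be a device ID and/or vendor ID"
    # and, for applications, a hash of the binary image; applying the same
    # hashing idea to a component identifier yields a fixed-length hashed ID.
    return hashlib.sha256(f"{vendor_id}:{device_id}".encode()).hexdigest()
```

The result is deterministic for a given component, so it can serve as a lookup key the way the identifiers in Walton's storage 301 do.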
However, Verburg and Walton do not specifically teach a scan register to store … an indication of success or failure of a validation operation.
On the other hand, Pappu teaches,
An apparatus comprising: a memory to store instructions and a plurality of test patterns; (par 11 “In various embodiments, integrated circuits such as processors or other systems on chip (SoC) may be provided with techniques to periodically trigger and perform in-field self testing. More specifically, techniques disclosed herein provide a dynamic functional safety testing capability for fabrics and other interconnects of a processor. In embodiments, such in-field self testing may be performed according to one or more test patterns which may be obtained in different manners.”) and
a processor, including a scan register to store test operation data and an indication of success or failure of a validation operation (fig 2:205; par 32 “Programmable register bank 205 may be configured to be programmably controlled to initiate test operations based on testing information, e.g., received from the non-volatile memory. Programmable register bank 205 further may receive testing information that is generated within fabric bridge controller 200 itself.”; par 35 “To enable fabric testing to occur, programmable register bank 205 couples to a master control circuit 212 … . In an embodiment, master control circuit 212 may be configured to put a request onto a fabric when an inject command is received from a control register of programmable register bank 205.”), coupled to the memory, to execute the instructions to
detect a scan interrupt; (fig 4:410; par 39 “As illustrated, method 400 begins by receiving a test interrupt in the bridge controller (block 410). In an embodiment, this test interrupt signal may be received from system software.”)
read the scan register; (fig 2:205; par 32 “Programmable register bank 205 may be configured to be programmably controlled to initiate test operations based on testing information, e.g., received from the non-volatile memory. Programmable register bank 205 further may receive testing information that is generated within fabric bridge controller 200 itself.”; par 35 “To enable fabric testing to occur, programmable register bank 205 couples to a master control circuit 212 … . In an embodiment, master control circuit 212 may be configured to put a request onto a fabric when an inject command is received from a control register of programmable register bank 205.”)
get testing information of the component based at least in part on test content from an external storage (fig 4:415,425; par 42 “If instead it is determined that internal test generation is to not occur, control proceeds to block 425 where the test content may be received from an external storage (e.g., a given non-volatile memory of the system). Next it is determined whether this test content is validated (diamond 430).”); and
initiate a scan of the component based at least in part on the component test information to detect any defects in the component.(fig 4:440; par 43 “control passes to block 440 where the test content is sent to one or more fabrics.”)
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify Verburg and Walton to incorporate the scan register to store an indication of success or failure of a validation operation of Pappu. One of ordinary skill in the art would have been motivated to remedy the shortcomings of Verburg and Walton -- a need for testing components after they have been deployed to the field (Pappu par 3 “The term "walking dead unit" refers to an IC that passes initial product testing and burn-in, but fails when used in the field. Current product testing schemes include manufacturing-based testing by a manufacturer of the IC and/or a platform manufacturer that incorporates the IC within a given platform. However, such testing is inadequate once an IC is dispositioned into a platform and sent into the field for normal operation.”) -- with Pappu providing a known method to solve a similar problem. Pappu provides “In various embodiments, integrated circuits such as processors or other systems on chip (SoC) may be provided with techniques to periodically trigger and perform in-field self testing. More specifically, techniques disclosed herein provide a dynamic functional safety testing capability for fabrics and other interconnects of a processor. In embodiments, such in-field self testing may be performed according to one or more test patterns which may be obtained in different manners.” (Pappu par 11)
Regarding claim 2, Verburg, Walton, and Pappu teach,
The apparatus of claim 1,
Verburg further teaches,
wherein the material vintage information comprises at least one of manufacturing process version, manufacturing bin, and date code of manufacturing of the component.(par 16 “The component data includes data associated with the vintage of the component which includes year, manufacturing lot, and the like.”)
Regarding claim 3, Verburg, Walton, and Pappu teach,
The apparatus of claim 1,
Verburg further teaches,
comprising the processor to receive vendor provided material vintage information of the component based at least in part on the component information.(par 30 “The component data includes vintage information from the manufacturer which includes failure rates, failure modes, and the like.”)
However, although Verburg gets vendor-provided vintage information, Verburg does not specifically teach “the processor to get the material vintage information from a vendor of the component” (interpreted as the processor going to the vendor to get the information). Also, although Verburg uses component identifying information, Verburg does not use a hashed ID.
On the other hand, Walton teaches,
comprising the processor to get the analysis rules information from a vendor of the component (par 26 “In one implementation, the processor may attempt to retrieve analysis rules in another manner when analysis rules are not received from the hardware component. For example, the processor may attempt to find analysis rules from the Internet, from a manufacturer website, or other rule engines.”) based at least in part on the hashed ID. (par 19 “The identifier associated with the hardware component may be any suitable identifier. For example, the identifier may be a Globally Unique ID (GUID). The identifier may be a device ID and/or vendor ID. The identifiers may be used to determine if information is stored in the storage related to the hardware component.”)
Regarding claim 5, Verburg, Walton, and Pappu teach,
The apparatus of claim 1,
Pappu further teaches,
wherein the scan register comprises a model specific register (MSR) of the processor.(fig 2:205; par 32 “Programmable register bank 205 may be configured to be programmably controlled to initiate test operations based on testing information, e.g., received from the non-volatile memory. Programmable register bank 205 further may receive testing information that is generated within fabric bridge controller 200 itself.”; par 35 “To enable fabric testing to occur, programmable register bank 205 couples to a master control circuit 212 … . In an embodiment, master control circuit 212 may be configured to put a request onto a fabric when an inject command is received from a control register of programmable register bank 205.”)
Regarding claim 6, it is the method that the apparatus of claim 1 implements and is rejected for the same reasons.
Regarding claim 7, it is the method that the apparatus of claim 2 implements and is rejected for the same reasons.
Regarding claim 8, Verburg, Walton, and Pappu teach,
The method of claim 6,
Pappu further teaches,
comprising validating a scan configuration by comparing a first checksum of the scan interrupt with a second checksum of the scan register and setting a failure indicator in the scan register(fig 2:205; par 32 “Programmable register bank 205 may be configured to be programmably controlled to initiate test operations based on testing information, e.g., received from the non-volatile memory. Programmable register bank 205 further may receive testing information that is generated within fabric bridge controller 200 itself.”;) when the first checksum does not match the second checksum.(par 42 “In an embodiment, content is checked for its authenticity to ensure that it has not been tampered with. The checksum of the loaded test program is compared against the expected value and if any discrepancy is found, program execution is terminated. This checksum computation provides a unique signature that distinguishes one digital footprint from another. If authenticity is not found, control passes to block 435 where an error may be raised, e.g., to the firmware that triggered the self testing.”)
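The checksum comparison mapped above can be sketched as follows. This is an illustrative stand-in only: the function and field names are hypothetical, and CRC-32 is an assumed algorithm, since Pappu's par 42 does not specify which checksum is used.

```python
import zlib

def validate_scan_configuration(interrupt_payload, register_payload):
    # Compare a checksum computed over the scan interrupt with one computed
    # over the scan register contents; on mismatch, set a failure indicator
    # (cf. Pappu par 42, where a checksum discrepancy terminates execution).
    first_checksum = zlib.crc32(interrupt_payload)
    second_checksum = zlib.crc32(register_payload)
    return {"failure_indicator": first_checksum != second_checksum}
```

Matching payloads clear the indicator; any tampered or corrupted payload sets it.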
Regarding claim 9, Verburg, Walton, and Pappu teach,
The method of claim 7,
Walton further teaches,
comprising getting the material vintage information from a vendor of the component (par 26 “In one implementation, the processor may attempt to retrieve analysis rules in another manner when analysis rules are not received from the hardware component. For example, the processor may attempt to find analysis rules from the Internet, from a manufacturer website, or other rule engines.”) based at least in part on the hashed ID. (par 19 “The identifier associated with the hardware component may be any suitable identifier. For example, the identifier may be a Globally Unique ID (GUID). The identifier may be a device ID and/or vendor ID. The identifiers may be used to determine if information is stored in the storage related to the hardware component.”)
Regarding claim 14, Verburg, Walton, and Pappu teach,
The method of claim 6,
Verburg further teaches,
getting the material vintage information based at least in part on the component ID. (par 30 “The component data includes vintage information from the manufacturer which includes failure rates, failure modes, and the like.”)
Walton further teaches,
comprising masking the hashed ID(par 19 “The identifier associated with the hardware component may be any suitable identifier. For example, the identifier may be a Globally Unique ID (GUID). The identifier may be a device ID and/or vendor ID. The identifiers may be used to determine if information is stored in the storage related to the hardware component.”) and getting the material analysis information based at least in part on the masked hashed ID. (par 26 “In one implementation, the processor may attempt to retrieve analysis rules in another manner when analysis rules are not received from the hardware component. For example, the processor may attempt to find analysis rules from the Internet, from a manufacturer website, or other rule engines.”)
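For illustration only (not drawn from Walton), masking a hashed ID as recited in claim 14 can be sketched as follows; the hash function and mask width are hypothetical choices:

```python
import hashlib

VISIBILITY_MASK = 0xFFFF  # hypothetical mask exposing only the low 16 bits

def hashed_id(component_id: str) -> int:
    # Hash a component identifier (e.g., a device ID or vendor ID) to an integer.
    return int(hashlib.sha256(component_id.encode()).hexdigest(), 16)

def masked_hashed_id(component_id: str) -> int:
    # Apply the mask so only part of the hashed ID is exposed for lookups.
    return hashed_id(component_id) & VISIBILITY_MASK
```

The masked value could then serve as the lookup key when getting the material analysis information.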
Regarding claim 15, Verburg, Walton, and Pappu teach,
The method of claim 6,
Verburg further teaches,
comprising getting the material vintage information from the scan register.(par 22 “The test controller 102 can obtain component data for each of these specific components to determine what test cases to run. This component data can include manufacturer vintage information for parts, failure rates, failure modes as well as the manufacturing history for similar parts including test times, number of failures in the test steps and/or include development test results for each component.”)
Regarding claim 17, Verburg, Walton, and Pappu teach,
The method of claim 6,
Verburg further teaches,
comprising determining a test pattern to be applied during performance of the scan based at least in part on the material vintage information.(par 17 “The dynamic testing system can take this memory vintage information into consideration when determining what test case to execute as well as how long to execute a test case on the specific component.”)
Regarding claim 18, it is the non-transitory machine-readable storage medium comprising instructions that the system of claim 1 implements and is rejected for the same reasons.
Regarding claim 19, Verburg, Walton, and Pappu teach,
The at least one non-transitory machine-readable storage medium of claim 18,
Walton further teaches,
comprising instructions that, when executed, cause at least one processor to get the material vintage information from a vendor of the component(par 26 “In one implementation, the processor may attempt to retrieve analysis rules in another manner when analysis rules are not received from the hardware component. For example, the processor may attempt to find analysis rules from the Internet, from a manufacturer website, or other rule engines.”) based at least in part on the hashed ID. (par 19 “The identifier associated with the hardware component may be any suitable identifier. For example, the identifier may be a Globally Unique ID (GUID). The identifier may be a device ID and/or vendor ID. The identifiers may be used to determine if information is stored in the storage related to the hardware component.”)
Regarding claim 20, it is the machine-readable storage containing instructions that implement the method of claim 15 and is rejected for the same reasons.
Regarding claim 22, it is the machine-readable storage containing instructions that implement the method of claim 17 and is rejected for the same reasons.
Claim(s) 4,10-12 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20230015477 A1 (Verburg), US 20170249229 A1 (Walton), and US 20190033367 A1 (Pappu) as applied to claims 1,6 above, and further in view of US 20210209012 A1 (Umberhocker).
Regarding claim 4, Verburg, Walton, and Pappu teach,
The apparatus of claim 1,
Verburg further teaches,
comprising the processor to store a result of the scan, including any defects detected in the component by performance of the scan, the hashed ID, and the material vintage information in a knowledge base coupled to the computing system.(par 21 “In addition, historical testing data taken from previous tests by the test controller 102 can be stored in the knowledge base 140.”)
However, although Verburg stores scan results, Verburg, Walton, and Pappu do not specifically teach storing scan results in a block chain.
On the other hand, Umberhocker teaches,
A secure database for storing actions taken regarding a source file, including test deployments, test results, and details about the test setup and components;(par 2 “According to one embodiment of the present invention, a method is provided that includes: creating a secure database for actions taken regarding a source file that is stored on a first computer; creating a test executable from one or more source files and storing it on the first computer; finalizing the source file for test on a second computer different from the first computer; hashing a test environment related to the source file and the second computer; and in response to determining that a version of the test executable provided to the second computer matches a version of the test executable provided to the secure database: executing the test executable on the second computer; hashing test results from testing the source file on the second computer; and adding the test executable as hashed and the test results as hashed to the secure database to actions already stored in the secure database.”)
comprising the processor to store a result of the scan, including any defects detected in the component by performance of the scan, test deployments, and the material vintage information in a block chain coupled to the computing system.(par 25 “The secure database 350 maintains an immutable and secure record of various actions taken with respect to a source file 330, including, but not limited to: …, test deployments (including test results), operational deployments, and metadata related to testing conditions (e.g., when the computing device running the test was certified as secure). In some embodiments, the secure database 350 stores the records in a blockchain, which allows for the verifiable reconstruction of the chain when an audit is performed.”)
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify Verburg, Walton, and Pappu to incorporate the blockchain storage of Umberhocker. One of ordinary skill in the art would have been motivated to remedy a shortcoming of Verburg, Walton, and Pappu, namely a need for a way to safely store test results (Umberhocker par 1 “During software development, among other computerized activities, continuous testing and verification is performed to ensure the robustness of the end product or service. The various steps in the testing and verification process are logged for later confirmation (e.g., as part of a security or compliance audit); however, logs can be manipulated or faked-leading to inaccurate or unreliable logs of the actions taken (or not taken) and the user who performed (or failed to perform) various sub-tasks.”), with Umberhocker providing a known method to solve a similar problem. Umberhocker teaches that “a method is provided that includes: creating a secure database for actions taken regarding a source file that is stored on a first computer; creating a test executable from one or more source files and storing it on the first computer; finalizing the source file for test on a second computer different from the first computer; hashing a test environment related to the source file and the second computer; and in response to determining that a version of the test executable provided to the second computer matches a version of the test executable provided to the secure database: executing the test executable on the second computer; hashing test results from testing the source file on the second computer; and adding the test executable as hashed and the test results as hashed to the secure database to actions already stored in the secure database.” (Umberhocker par 2).
Regarding claim 10, Verburg, Walton, and Pappu teach,
The method of claim 6,
Verburg further teaches,
comprising storing a result of the scan, including any defects detected in the component by performance of the scan, the hashed ID, and the material vintage information in a knowledge base coupled to the computing system.(par 21 “In addition, historical testing data taken from previous tests by the test controller 102 can be stored in the knowledge base 140.”)
However, although Verburg stores scan results, Verburg, Walton, and Pappu do not specifically teach storing scan results in a decentralized database.
On the other hand, Umberhocker teaches,
A secure database for storing actions taken regarding a source file, including test deployments, test results, and details about the test setup and components;(par 2 “According to one embodiment of the present invention, a method is provided that includes: creating a secure database for actions taken regarding a source file that is stored on a first computer; creating a test executable from one or more source files and storing it on the first computer; finalizing the source file for test on a second computer different from the first computer; hashing a test environment related to the source file and the second computer; and in response to determining that a version of the test executable provided to the second computer matches a version of the test executable provided to the secure database: executing the test executable on the second computer; hashing test results from testing the source file on the second computer; and adding the test executable as hashed and the test results as hashed to the secure database to actions already stored in the secure database.”)
comprising storing a result of the scan, including any defects detected in the component by performance of the scan, test deployments, and the material vintage information in a decentralized database coupled to the computing system. (par 25 “The secure database 350 maintains an immutable and secure record of various actions taken with respect to a source file 330, including, but not limited to: …, test deployments (including test results), operational deployments, and metadata related to testing conditions (e.g., when the computing device running the test was certified as secure). In some embodiments, the secure database 350 stores the records in a blockchain, which allows for the verifiable reconstruction of the chain when an audit is performed.”)
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify Verburg, Walton, and Pappu to incorporate the distributed storage of Umberhocker. One of ordinary skill in the art would have been motivated to remedy a shortcoming of Verburg, Walton, and Pappu, namely a need for a way to safely store test results (Umberhocker par 1 “During software development, among other computerized activities, continuous testing and verification is performed to ensure the robustness of the end product or service. The various steps in the testing and verification process are logged for later confirmation (e.g., as part of a security or compliance audit); however, logs can be manipulated or faked-leading to inaccurate or unreliable logs of the actions taken (or not taken) and the user who performed (or failed to perform) various sub-tasks.”), with Umberhocker providing a known method to solve a similar problem. Umberhocker teaches that “a method is provided that includes: creating a secure database for actions taken regarding a source file that is stored on a first computer; creating a test executable from one or more source files and storing it on the first computer; finalizing the source file for test on a second computer different from the first computer; hashing a test environment related to the source file and the second computer; and in response to determining that a version of the test executable provided to the second computer matches a version of the test executable provided to the secure database: executing the test executable on the second computer; hashing test results from testing the source file on the second computer; and adding the test executable as hashed and the test results as hashed to the secure database to actions already stored in the secure database.” (Umberhocker par 2).
Regarding claim 11, Verburg, Walton, Pappu, and Umberhocker teach,
The method of claim 10,
Umberhocker further teaches,
wherein the decentralized database comprises a block chain. (par 25 “The secure database 350 maintains an immutable and secure record of various actions taken with respect to a source file 330, including, but not limited to: …, test deployments (including test results), operational deployments, and metadata related to testing conditions (e.g., when the computing device running the test was certified as secure). In some embodiments, the secure database 350 stores the records in a blockchain, which allows for the verifiable reconstruction of the chain when an audit is performed.”)
Regarding claim 12, Verburg, Walton, Pappu, and Umberhocker teach,
The method of claim 10,
Walton further teaches,
comprising at least one of reducing functionality of the component and deactivating the component when a defect is detected in the component by performance of the scan.(par 1 “In one implementation, the components may be analyzed in comparison to a previous workload and its outcome, such as to determine types of changes to make to the computing system and/or to individual components within the computing system.”. par 28 “The rules analysis engine may make suggestions for changes in hardware components or configurations based on the analysis.” Walton teaches determining what changes to make to the system based on analyzing system performance. )
Claim(s) 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20230015477 A1 (Verburg), US 20170249229 A1 (Walton), and US 20190033367 A1 (Pappu) as applied to claim 6 above, and further in view of US 20190286825 A1 (Ponnuru).
Regarding claim 13, Verburg, Walton, and Pappu teach,
The method of claim 6,
However, Verburg, Walton, and Pappu do not specifically teach initiating the scan by one of a baseboard management controller (BMC) and a Trusted Execution Environment (TEE) of the computing system.
On the other hand, Ponnuru teaches,
A datacenter management system that, based on a compliance template, determines a set of compliance tests for the information handling resources, executes the set of tests, and reports the results,(par 8 “The information handling system may further be configured to, based on the compliance template and a compliance standard, determine a set of compliance tests for the information handling resource. The information handling system may be further configured to execute the set of compliance tests, and, in response to a failure of at least one test of the set of compliance tests, provide an indication of the failure.”)
comprising initiating the scan by one of a baseboard management controller (BMC)(par 31 “In certain embodiments, management controller 112 may include or may be an integral part of a baseboard management controller (BMC),”) and a Trusted Execution Environment (TEE) of the computing system(par 38 “Security functions 202 may be accessible via a Trusted Platform Module (TPM), BIOS or other firmware, drivers, an operating system, application programs, or any other suitable manner.”. Table 1 Operational Environment: Security level 3 “Referenced PPs plus trusted path evaluated at EAL3 plus security policy modeling”) and performing the scan by a defect scanner of the component.(Table 1: Self-Tests “Power-up tests: cryptographic algorithm tests, software/firmware integrity tests, critical functions tests. Conditional tests.”)
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify Verburg, Walton, and Pappu to incorporate the baseboard management controller and trusted environment of Ponnuru. One of ordinary skill in the art would have been motivated to remedy a shortcoming of Verburg, Walton, and Pappu, namely a need to ensure that components in the system are operating properly (Ponnuru par 4 “Meeting security compliance policies has typically required organizations to conduct manual certification exercises at frequent intervals, which is costly in terms of both time and resources. Hardware, firmware, and software changes exacerbate the need for frequent manual certification exercises, as do any changes in the compliance requirements themselves. The lack of a standardized and automated compliance verification framework also leads to frequent non-compliance scenarios.”), with Ponnuru providing a known method to solve a similar problem. Ponnuru provides “techniques that may be employed to assist management of information handling systems in these and other situations” (Ponnuru par 5) by running compliance tests on components (Ponnuru par 8 “The information handling system may further be configured to, based on the compliance template and a compliance standard, determine a set of compliance tests for the information handling resource. The information handling system may be further configured to execute the set of compliance tests, and, in response to a failure of at least one test of the set of compliance tests, provide an indication of the failure.”).
Claim(s) 16,21 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 20230015477 A1 (Verburg), US 20170249229 A1 (Walton), and US 20190033367 A1 (Pappu) as applied to claim 6,18 above, and further in view of US 11822688 B1 (Legault).
Regarding claim 16, Verburg, Walton, and Pappu teach,
The method of claim 6,
However, Verburg, Walton, and Pappu do not specifically teach comprising determining a visibility level of the hashed ID.
On the other hand, Legault teaches,
A system that stores variable domain data and provides access to the data based on the user’s access level;(col 5 ln 50-63 “In one embodiment of the present invention, a method of providing controlled, electronic access to variable domain data stored in a data processing system includes receiving information from a principal that includes information identifying the principal. The method also includes performing one or more logical relationship operations on a data security model and a variable domain data model using security attributes of the data security model to determine a level of resource data access to be granted to the principal, wherein the data security model and the variable domain model share a common logical relationship data structure and granting the principal access to the resource data in accordance with the determined level of resource data access to be granted to the principal.”)
comprising determining a visibility level of the hashed ID.(fig 8; col 10 ln 17-30 “Operation 810 determines the security access level (i.e. the scope of security access) to grant the requesting principal. Configuration space intersection process 812 represents one embodiment of operation 810. Configuration space intersection process 812 performs an intersection operation between a data security model configuration space and the configuration model configuration space in, for example, the manner previously described. Where the configuration and security model configuration spaces overlap, the principal will be granted a level of access to the data. The principal will be denied the level of access to the data where no overlap exists.”)
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify Verburg, Walton, and Pappu to incorporate the determination of a visibility level of the domain data of Legault. One of ordinary skill in the art would have been motivated to remedy a shortcoming of Verburg, Walton, and Pappu, namely a need to limit users’ access to only the data that they should have access to (Legault col 1 ln 15-25 “Data security issues often exist at the forefront of technology challenges faced by entities that maintain, use, view, manipulate, and otherwise access data. Often entities desire to grant different levels of access to different subsets of data to different principals. For example, in a product configuration context, it may be desirable to allow some principals to access one set of data and allow other principals to access another set of data, and the data represents a single, data configuration space.”), with Legault providing a known method to solve a similar problem. Legault provides “a method of providing controlled, electronic access to variable domain data stored in a data processing system includes receiving information from a principal that includes information identifying the principal. The method also includes performing one or more logical relationship operations on a data security model and a variable domain data model using security attributes of the data security model to determine a level of resource data access to be granted to the principal, wherein the data security model and the variable domain model share a common logical relationship data structure and granting the principal access to the resource data in accordance with the determined level of resource data access to be granted to the principal.” (Legault col 5 ln 50-63).
Regarding claim 21, it is the machine-readable storage containing instructions that implement the method of claim 16 and is rejected for the same reasons.
Response to Arguments
Applicant's arguments, see remarks pg. 6-8, filed 12/29/2025, regarding the rejection of claims 1-22 under 35 U.S.C. 101 have been fully considered but they are not persuasive.
With respect to the independent claims, the applicant has argued that the claim improves detecting defects in components made by the silicon manufacturing process and should be eligible subject matter under step 2A prong two because the claim recites additional elements that integrate the judicial exception into a practical application. Applicant further explains that “vintage” refers to descriptive information identifying a component of a computing system, and that the vintage information could include components created by a silicon manufacturing process. The examiner respectfully disagrees. The specification defines “vintage” in paragraph 12 of the specification. (spec. par 12 “As used herein "vintage" refers to descriptive information identifying a component of a computing system, such as one or more of wafer material, manufacturing process node, manufacturing process version information, manufacturing bin, manufacturing lot, date of manufacture, time of manufacture, component type, product type, product version, date code of manufacturing, information regarding whether any processing cores are de-featured or all processing cores are enabled, etc. Components of the same vintage have common characteristics and may have similar DPM rates.”). Although “material vintage information of the component” could include components created by a silicon manufacturing process (“wafer material … whether any processing cores are de-featured or all processing cores are enabled,”), the vintage definition “descriptive information identifying a component of a computing system” is recited at a level of generality that could cover any component in a computing system, not just components produced by a silicon manufacturing process (spec par 12 “such as one or more of … manufacturing process node, manufacturing process version information, manufacturing bin, manufacturing lot, date of manufacture, time of manufacture, component type, product type, product version, date code of manufacturing, … etc.”).
The wafer material and processing core descriptive information are only examples, are not always required, and do not restrict the scope of the claim. Since the descriptive information does not have to be tied to something specific to the field of silicon manufacturing processes, the claim currently covers all components, including components not produced by a silicon manufacturing process, so limitation “material vintage information of the component” does not tie the judicial exception beyond generic components in a computing system. [See MPEP 2106.04(d)(1) “Evaluating Improvements in the Functioning of a Computer, or an Improvement to Any Other Technology or Technical Field in Step 2A Prong Two”]
In general, the additional elements in step 2A prong two are broadly data gathering, data sending, and storage limitations(“store instructions and a plurality of test patterns;”, “store a hashed identifier (ID) of a component of a computing system and an indication of success or failure of a validation operation”, “read the scan register”, and “get material vintage information of the component”), generic components(processor, memory, scan register, component of a computing system), or a “scan of the component”, which does not tie the judicial exception beyond generic components in a computing system. [See MPEP 2106.04(d)(1) “Evaluating Improvements in the Functioning of a Computer, or an Improvement to Any Other Technology or Technical Field in Step 2A Prong Two”]
Applicant’s arguments, see remarks pg. 8-10, filed 12/29/2025, with respect to the rejection of claims 1-3, 6-7, 9, 14-15, 17-20, and 22 under 35 U.S.C. 103 as being unpatentable over US 20230015477 A1 (Verburg) in view of US 20170249229 A1 (Walton) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of US 20230015477 A1 (Verburg) in view of US 20170249229 A1 (Walton) and US 20190033367 A1 (Pappu).
With respect to the independent claims, the applicant has argued that Verburg and Walton do not teach limitation “scan register”, explaining that Verburg and Walton do not teach storing scan information in a register. Applicant further explains that Verburg stores test data in a test controller and knowledge base, which are external to the component, and that Verburg does not mention the word “register”.
The examiner respectfully disagrees. Verburg teaches, in the cited (par 22 “The test controller 102 can obtain component data for each of these specific components to determine what test cases to run. This component data can include manufacturer vintage information for parts, failure rates, failure modes as well as the manufacturing history for similar parts including test times, number of failures in the test steps and/or include development test results for each component.”; This shows that the components are identifiable, as manufacturer vintage information could include things like a serial number or some other kind of ID. par 27 “The learning test case 302 can be selected based on the system configuration 101 of the system under test and the component data 306 for the system, using information from the test case database 104 and the knowledge base 140, also referenced in FIG. 1, to select tests that may be likely to detect any potential defects in the SUT.”). What Verburg teaches is a test controller that collects and stores identifying information of a component. Walton teaches, in the cited (fig 2:200; par 18 “In one implementation, the processor checks information related to the layout of the computing system and periodically queries the hardware components to receive identification information used to determine if the hardware component has been changed.”). What Walton teaches is that the hardware components store the identification information about the hardware component. Together, they teach storing component-identifying data (like a hashed component ID). Although Verburg and Walton do not explicitly label their component-identifying data storage structure a “register”, combined, Verburg and Walton describe a storage location that “store[s] a hashed identifier (ID) of a component of a computing system”. The examiner interprets this combination as limitation “a scan register to store a hashed identifier (ID) of a component of a computing system”.
The claim does not require that the scan register be a standalone hardware register, only that it stores a hashed identifier of a component of a computing system.
With respect to the independent claims, the applicant has also argued that Verburg and Walton do not teach the amended limitation “a scan register to store … an indication of success or failure of a validation operation”. The examiner respectfully disagrees. The newly cited Pappu teaches in the cited (fig 2:205; par 32 “Programmable register bank 205 may be configured to be programmably controlled to initiate test operations based on testing information, e.g., received from the non-volatile memory. Programmable register bank 205 further may receive testing information that is generated within fabric bridge controller 200 itself.”). The examiner interprets this as limitation “a scan register to store … and an indication of success or failure of a validation operation”.
With respect to the independent claims, the applicant has also argued that Verburg and Walton do not teach limitation “a hashed identifier (ID) of a component of a computing system”. The examiner respectfully disagrees. Walton teaches in the cited (fig 3:302; par 30 “Block 302 shows hardware component identifiers stored.”; par 19 “The identifier associated with the hardware component may be any suitable identifier. For example, the identifier may be a Globally Unique ID (GUID). The identifier may be a device ID and/or vendor ID. The identifiers may be used to determine if information is stored in the storage related to the hardware component.”; par 25 “For example, the identifier may be the application name or a hash from the binary image of the application.”). Walton teaches that the identifier may be any suitable identifier, and suggests a globally unique ID, the device ID, the vendor ID, and hashing the application name. It would be obvious to one of ordinary skill in the art to also try hashing the component identifier as Walton does with the application name. The examiner interprets this as limitation “a hashed identifier (ID) of a component of a computing system”.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20200132762 A1 - Kukreja - uses scans to test different power domains in a system on chip environment.
US 10247773 B2 - Menon - tests wireless devices
US 11714744 B2 - Gross - runs tests and performs actions based on the results of the tests. Identifies resource performance during tests.
US 20220342738 A1 - Samuel - Optimized diagnostic plan for a system. Uses ML to determine which diagnostic tests to run on endpoints.
US 20220197765 A1 - Zhang - given a proposed device update patch, the system selects tests to run against the new patch based on previous tests and experience.
US 11010285 B2 - Hicks - combinatorial test cases where running test cases generates more test cases. System under test can be hardware or software.
US 20210117297 A1 - Ryan - picks tests from database based on error message and tests components individually.
US 20210109847 A1 - Jaganmohan - responds to errors in test scripts with a robot handler based on the exception type.
US 10977163 B1 - Rhodes - integrated computing system test management. Not much about failures.
US 20200250077 A1 - Bergman - tests computing resources, analyzes which resources were missed and generates new tests that cover the missing resources.
US 20200241945 A1 - Sterioff - receive error message, send diagnostic tests, receive diagnostic results, send remedial actions.
US 20200097347 A1 - Mahindru - deep diagnostics and health checking of resources in distributed data centers.
US 20220206978 A1 - Kim - system-on-chip testing of components during runtime; can also interrupt tests while they are running.
US 20190033367 A1 - Pappu - test interrupt to run tests on fabrics of a processor. Intel.
US 20230015477 A1 - Verburg - vintage information from manufacturer par 30; selects tests based on system data which includes component vintage information.
US 20230056727 A1 - Andrews - hashes a component to verify the component has not changed later. Tests the system on startup through BIOS.
US 20170249229 A1 - Walton - uses hashing for identifying applications; also discloses hardware component identification. Par. 19: "The identifier associated with the hardware component may be any suitable identifier. For example, the identifier may be a Globally Unique ID (GUID)."
US 20220413950 A1 - Ott - baseboard management controller predicts storage failure, isolates the storage, and runs extra tests on it.
US 20230027315 A1 - Liu - secure boot policy. The PSP verifies that boot code is valid before running it. The BMC can test components.
US 20180262481 A1 - Doi - confidential information in a document. Confidential attributes include IDs and visibility levels of confidential information (pars. 29, 31, 48).
US 20160299771 A1 - Navarro - visibility levels in design documents.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL XU whose telephone number is (571)272-5688. The examiner can normally be reached Monday-Friday 8:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bryce Bonzo can be reached at (571) 272-3655. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.X./Examiner, Art Unit 2113
/MARC DUNCAN/Primary Examiner, Art Unit 2113