Prosecution Insights
Last updated: April 19, 2026
Application No. 18/744,881

AUTOMATICALLY DETECTING AND MITIGATING RISKS ASSOCIATED WITH INSTALLING A SOFTWARE PACKAGE ON A COMPUTER SYSTEM

Non-Final OA: §101, §103
Filed: Jun 17, 2024
Examiner: DOAN, TAN
Art Unit: 2445
Tech Center: 2400 (Computer Networks)
Assignee: Red Hat Inc.
OA Round: 1 (Non-Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 72% (225 granted / 311 resolved), above average (+14.3% vs TC avg)
Interview Lift: +25.4% in resolved cases with an interview (strong)
Avg Prosecution: 3y 2m; 32 applications currently pending
Career History: 343 total applications across all art units

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 57.3% (+17.3% vs TC avg)
§102: 16.9% (-23.1% vs TC avg)
§112: 14.9% (-25.1% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 311 resolved cases.

Office Action

Grounds: §101, §103
DETAILED ACTION

Claim Rejections - 35 U.S.C. § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites generating a plurality of severity scores for the software components; generating a risk score based on the severity scores; and determining an alternative software component. That limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “by one or more processors,” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the “by one or more processors” language, the “generating severity scores” and “generating a risk score” in the context of this claim encompass the user manually calculating the scores. Similarly, the limitation of determining an alternative software component, as drafted, is a judgment that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the claims recite only one additional element, outputting a notification, which is well-understood, routine, conventional activity. The processor in both steps is recited at a high level of generality (i.e., as a generic processor performing a generic computer function), such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.

Regarding claims 2, 11 and 18, the limitation installing the software package is well-understood, routine, conventional activity. Regarding claims 3, 12 and 19, the limitation a different version of the particular software component is a well-understood, routine, conventional element. Regarding claims 4, 13 and 20, the limitation mapping based on similarities is an observation that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. Regarding claims 5 and 14, the limitation outputting a notification is well-understood, routine, conventional activity. Regarding claims 6 and 15, the limitation retrieving, analyzing the source code; and generating the risk score is an evaluation that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. Regarding claims 7 and 16, the limitation a performance metric is a well-understood, routine, conventional element. Regarding claims 8 and 17, the limitation a programming error is a well-understood, routine, conventional element. Regarding claim 9, the limitation an image file inside a container is a well-understood, routine, conventional element.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8 and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wyatt et al. (US20130326477A1) in view of Jevans (US20160127367A1) and Mishra et al. (US20210216643A1).

Regarding claim 1, Wyatt discloses a method comprising ([Abstract] shows detection of application security risks; para [0036] shows an owner of a mobile device may visit a marketplace 123 and select an application for remote installation on mobile device 147): receiving, by one or more processors (para [0010]), a request associated with installing a software package on a computing device (para [0036]), wherein the software package includes a plurality of software components ([Abstract] shows software applications composed of multiple components; software applications are analyzed to determine their components and to identify the behaviors associated with each of the components; para [0110] shows static analysis may involve looking at the code inside of a component; para [0111] shows dynamic analysis may determine whether a component has a particular behavior); and in response to receiving the request: generating, by the one or more processors, a plurality of severity scores for the plurality of software components, each severity score of the plurality of severity scores corresponding to a respective software component of the plurality of software components and indicating a severity of one or more vulnerabilities associated with the respective software component (para [0110] shows static analysis may involve looking at the code inside of a component; para [0111] shows dynamic analysis may determine whether a component has a particular behavior; para [0138] shows a data repository storing component data for known components; para [0146] shows the component data includes risk scores for known components; para [0173] shows a behavior is determined to have a low risk score; para [0175] shows a component is detected to include unacceptable code).
Wyatt discloses the method providing a risk score regarding an application to be installed on the computing device of the user (para [0146]) but fails to teach: generating, by the one or more processors, a risk score for the software package based on the plurality of severity scores, the risk score representing an overall level of risk associated with installing the software package on the computing device; determining, by the one or more processors, that an alternative software component is correlated in a predefined mapping with a particular software component of the plurality of software components; determining, by the one or more processors, a first severity score for the particular software component, the first severity score being among the plurality of severity scores; determining, by the one or more processors, a second severity score for the alternative software component; determining, by the one or more processors, that the second severity score is lower than the first severity score; and outputting, by the one or more processors, and prior to the computing device installing the software package, a notification indicating the risk score and the alternative software component.
However, Jevans discloses (para [0098] shows the application risk control system 504 determines when the mobile device installs an application that poses a risk to the enterprise network 520 (or even possibly the mobile device 510); para [0054] shows the system 105 can perform static analyses, behavior analysis, and dynamic analyses): generating, by the one or more processors, a risk score for the software package based on the plurality of severity scores, the risk score representing an overall level of risk associated with installing the software package on the computing device (para [0059] shows the system to calculate a component risk score; para [0065] shows a combination or composite (risk) score may be determined that summarizes the component risk score; para [0080] shows calculating a risk score for the application; the general risk score can be a composite of these individual risk scores, selected combinations of the individual scores, weighted portions of individual scores, and combinations thereof).

It would have been obvious to one of ordinary skill in the art at the time the invention was effectively filed to modify the method of Wyatt with the teaching of Jevans in order to automatically remediate the application if the risk score for the application meets or exceeds a risk score threshold (Jevans; para [0081]).
Wyatt-Jevans as combined fails to teach: determining, by the one or more processors, that an alternative software component is correlated in a predefined mapping with a particular software component of the plurality of software components; determining, by the one or more processors, a first severity score for the particular software component, the first severity score being among the plurality of severity scores; determining, by the one or more processors, a second severity score for the alternative software component; determining, by the one or more processors, that the second severity score is lower than the first severity score; and outputting, by the one or more processors, and prior to the computing device installing the software package, a notification indicating the risk score and the alternative software component.

However, Mishra discloses ([Abstract] shows the vulnerability corresponding to a software package is evaluated prior to installation, and alternative software packages are identified; para [0048] shows security-related software flaws, misconfigurations, product names, and impact metrics): determining, by the one or more processors, that an alternative software component is correlated in a predefined mapping with a particular software component of the plurality of software components (para [0035] shows software package analyzer 218 also identifies alternative software packages 236 that are related to software package 230 (e.g., having same or similar bag of words models)); determining, by the one or more processors, a first severity score for the particular software component, the first severity score being among the plurality of severity scores; determining, by the one or more processors, a second severity score for the alternative software component; determining, by the one or more processors, that the second severity score is lower than the first severity score (para [0035] shows software package analyzer 218 calculates exploitability score 240 for each respective alternative software package; para [0049] shows ranking these alternative software packages from least vulnerable to most vulnerable based on a calculated exploitability score corresponding to each respective alternative software package, and providing an exploitability score on a defined scale, such as, for example, 1 to 10, for each identified vulnerability; para [0073] shows the computer generates rankings corresponding to calculated exploitability scores of the related alternative software packages); and outputting, by the one or more processors, and prior to the computing device installing the software package, a notification indicating the risk score and the alternative software component ([Abstract] shows insights are generated based on rankings corresponding to calculated exploitability scores of the related alternative software packages; para [0030] shows software package analyzer 218 to generate textual insights and recommendations; by providing the insights, natural language generation component 222 enables a user to make an informed decision as to whether to install the software package or install an alternative software package).

It would have been obvious to one of ordinary skill in the art at the time the invention was effectively filed to modify the method of Wyatt-Jevans with the teaching of Mishra in order to enable a user to make an informed decision as to whether to install the software package or install an alternative software package (Mishra; para [0030]).
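For orientation, the claim 1 method that the rejection maps across Wyatt, Jevans, and Mishra can be sketched in Python. This is an illustrative reading only: all names, data values, and the aggregation rule (taking the maximum severity as the package risk score) are hypothetical, not the applicant's or any reference's actual implementation.

```python
# Illustrative sketch of the claim 1 flow: per-component severity scores,
# an aggregate package risk score, and a predefined alternative-component
# mapping. All names, scores, and the max-aggregation rule are hypothetical.

# Predefined mapping of components to candidate alternatives
# (claim element: "correlated in a predefined mapping").
ALTERNATIVES = {"liba-1.0": "liba-1.1"}

# Hypothetical per-component severity scores (e.g., on a 0-10 scale).
SEVERITY = {"liba-1.0": 7.5, "libb-2.3": 2.0, "liba-1.1": 3.1}


def assess_package(components):
    """Return (risk_score, notifications) for a package's components."""
    # Generate one severity score per component.
    scores = {c: SEVERITY.get(c, 0.0) for c in components}
    # Aggregate into a package-level risk score (max is one simple choice).
    risk_score = max(scores.values())
    notifications = []
    for component, first_score in scores.items():
        alt = ALTERNATIVES.get(component)
        if alt is None:
            continue
        second_score = SEVERITY.get(alt, 0.0)
        # Recommend the alternative only if it is strictly less severe,
        # mirroring the "second severity score is lower" limitation.
        if second_score < first_score:
            notifications.append((risk_score, component, alt))
    return risk_score, notifications


# Output the notification prior to installation.
risk, notes = assess_package(["liba-1.0", "libb-2.3"])
print(risk)   # 7.5
print(notes)  # [(7.5, 'liba-1.0', 'liba-1.1')]
```

The comparison step (first vs. second severity score) is what the examiner reads onto Mishra's exploitability-score ranking of alternative packages; the aggregate score is what is read onto Jevans's composite risk score.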
Regarding claim 2, Wyatt-Jevans-Mishra as applied to claim 1 discloses: based on determining that the second severity score is lower than the first severity score, installing the software package with the alternative software component rather than the particular software component (Mishra; para [0072] shows the computer identifies related alternative software packages corresponding to the software package to be installed on the data processing system based on a comparative analysis between alternative software packages and the software package; para [0073] shows the computer generates rankings corresponding to calculated exploitability scores of the related alternative software packages).

Regarding claim 3, Wyatt-Jevans-Mishra as applied to claim 1 discloses the alternative software component is a different version of the particular software component (Wyatt; para [0167] shows that the component in the new application is a newer version or a tampered version. Mishra; para [0072] shows the computer identifies related alternative software packages corresponding to the software package to be installed on the data processing system based on a comparative analysis between alternative software packages and the software package).

Regarding claim 4, Wyatt-Jevans-Mishra as applied to claim 1 discloses the alternative software component is correlated to the particular software component in the predefined mapping based on functional similarities between the alternative software component and the particular software component (Wyatt; para [0167] shows a newer version or a tampered version; para [0169] shows determining when a component has changed behavior (i.e., the actual behavior is different from the known behavior stored in the component identity); a behavior change may also be associated with a code fingerprint having changed slightly. Mishra; para [0055] shows the computer system to rank the available related alternative software packages based on how closely they match the software package).

Regarding claim 5, Wyatt-Jevans-Mishra as applied to claim 1 discloses outputting, as part of the notification, at least one difference between the particular software component and the alternative software component (Mishra; para [0030] shows software package analyzer 218 to generate textual insights and recommendations; by providing the insights, natural language generation component 222 enables a user to make an informed decision as to whether to install the software package or install an alternative software package; para [0036] shows insights 242 are based on analysis of software package 230 and each respective alternative software package by machine learning component 220; para [0056] shows the machine learning algorithm to find related alternative software packages, which have matching or similar bag of words models).

Regarding claim 6, Wyatt-Jevans-Mishra as applied to claim 1 discloses generating the risk score [vulnerability score] by: retrieving source code associated with the software package; generating a quality score [exploitability score] associated with the software package by analyzing the source code; and generating the risk score based on the quality score (Wyatt; para [0110] shows static analysis may also be used that involves looking at the code inside of a component; para [0171] shows the actual code itself may be asking for inappropriate behavior. Mishra; para [0061] shows the computer, using a regression machine learning algorithm, calculates an exploitability score for the identified vulnerability of the software package based on the generated vector of word frequencies; para [0062] shows the computer updates an entry for the software package in a common vulnerabilities and exposures database based on a vulnerability score contained in the retrieved vulnerability information and the calculated exploitability score for the identified vulnerability of the software package).

Regarding claim 7, Wyatt-Jevans-Mishra as applied to claim 6 discloses the quality score is determined based on a performance metric associated with the source code (Wyatt; para [0110] shows static analysis may also be used that involves looking at the code inside of a component; para [0170] shows the actual code itself may be asking for inappropriate behavior. Mishra; para [0048] shows security-related software flaws, misconfigurations, product names, and impact metrics; para [0061] shows the computer, using a regression machine learning algorithm, calculates an exploitability score for the identified vulnerability of the software package based on the generated vector of word frequencies; para [0062] shows the computer updates an entry for the software package in a common vulnerabilities and exposures database based on a vulnerability score contained in the retrieved vulnerability information and the calculated exploitability score for the identified vulnerability of the software package).

Regarding claim 8, Wyatt-Jevans-Mishra as applied to claim 6 discloses the quality score is determined based on a programming error identified within the source code (Wyatt; para [0110] shows static analysis may also be used that involves looking at the code inside of a component; para [0170] shows the actual code itself may be asking for inappropriate behavior. Mishra; para [0048] shows security-related software flaws, misconfigurations, product names, and impact metrics; para [0061] shows the computer, using a regression machine learning algorithm, calculates an exploitability score for the identified vulnerability of the software package based on the generated vector of word frequencies; para [0062] shows the computer updates an entry for the software package in a common vulnerabilities and exposures database based on a vulnerability score contained in the retrieved vulnerability information and the calculated exploitability score for the identified vulnerability of the software package).

Regarding claims 10-16, these claims are directed to a computer-readable medium and require limitations similar to those recited in method claims 1-7 to carry out the method steps. Since the combined references of Wyatt, Jevans, and Mishra teach the method including the limitations required to carry out the method steps, claims 10-16 would also have been obvious in view of the structures disclosed in the combination. Furthermore, Wyatt-Jevans-Mishra combined discloses a computer-readable medium comprising program code that is executable by one or more processors for causing the one or more processors to perform operations (Wyatt; para [0011]).

Regarding claims 17-20, these claims are directed to a system and require limitations similar to those recited in method claims 1-4 to carry out the method steps. Since the combined references of Wyatt, Jevans, and Mishra teach the method including the limitations required to carry out the method steps, claims 17-20 would also have been obvious in view of the structures disclosed in the combination.
Furthermore, Wyatt-Jevans-Mishra combined discloses one or more processors; and one or more memories including program code that is executable by the one or more processors for causing the one or more processors to perform operations (Wyatt; para [0011]).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Wyatt in view of Jevans and Mishra, further in view of Simanavicius et al. (US20220368621A1). Regarding claim 9, Wyatt-Jevans-Mishra as applied to claim 1 fails to show the software package is an image file for deploying an application inside a container. However, Simanavicius discloses the software package is an image file for deploying an application inside a container ([Abstract] shows installing containerized applications; para [0193] shows a container image is a lightweight, standalone, executable package of software that includes everything needed to run an application). It would have been obvious to one of ordinary skill in the art at the time the invention was effectively filed to modify the method of Wyatt-Jevans-Mishra with the teaching of Simanavicius in order to install a lightweight, standalone, executable package of software that includes everything needed to run an application (Simanavicius; para [0193]).

Citation of Relevant Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Sethi et al. (US12135790B2) discloses in [Abstract] a method for detecting vulnerabilities of an installed application, comprising: obtaining information related to an application installed to a client device; sending, by an application monitoring agent, the information related to the application installed to the client device to a vulnerability validator; determining, by the vulnerability validator, based on impact score information, whether a specific version of the application installed to the client device has vulnerabilities; sending the impact score information to a client device upgrade manager; and notifying, based on the impact score information, the client device when the application installed to the client device has vulnerabilities.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAN DOAN, whose telephone number is (571) 270-0162. The examiner can normally be reached Monday - Friday, 8am - 5pm ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Oscar Louie, can be reached at (571) 270-1684. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAN DOAN/
Primary Examiner, Art Unit 2445

Prosecution Timeline

Jun 17, 2024
Application Filed
Feb 08, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592872: DETECTING AND VALIDATING ANOMALIES FROM ONGOING DATA COLLECTION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591365: INPUT/OUTPUT FENCING OF A SHARED CLOUD STORAGE VOLUME (granted Mar 31, 2026; 2y 5m to grant)
Patent 12587476: Method and Apparatus for publishing an RT-5G routing message, Storage Medium and Electronic Apparatus (granted Mar 24, 2026; 2y 5m to grant)
Patent 12572438: QUANTUM COMPUTING MONITORING SYSTEM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12563035: METHOD AND SYSTEM FOR ACCESS AUTHORISATION (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72%
With Interview: 98% (+25.4%)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 311 resolved cases by this examiner. Grant probability derived from career allow rate.
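The headline figures above appear to be derived from simple arithmetic on the examiner's career data: the career allow rate is the granted/resolved ratio, and the "with interview" probability looks like that rate plus the interview lift read as additive percentage points. A quick check of that inference (an assumption, not a documented formula of this report):

```python
# Reconstructing the projection figures from the career data shown,
# assuming the interview lift is additive in percentage points.
granted, resolved = 225, 311
base_grant = 100 * granted / resolved   # career allow rate, %
interview_lift = 25.4                   # reported lift, percentage points

with_interview = min(base_grant + interview_lift, 100.0)
print(round(base_grant))      # 72
print(round(with_interview))  # 98
```

The additive reading reproduces both the 72% career allow rate and (after rounding) the 98% with-interview figure, so it is at least consistent with the numbers shown.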
