Prosecution Insights
Last updated: April 19, 2026
Application No. 18/324,940

SYSTEMS AND METHODS FOR DETECTING ACCESSIBILITY FAILURES

Current status: Non-Final OA (§103)
Filed: May 26, 2023
Examiner: LYONS, ANDREW M
Art Unit: 2191
Tech Center: 2100 — Computer Architecture & Software
Assignee: Capital One Services LLC
OA Round: 3 (Non-Final)

Predictions
Grant probability: 74% (Favorable)
Expected OA rounds: 3-4
Time to grant: 2y 6m
Grant probability with interview: 90%

Examiner Intelligence

Career allow rate: 74% (338 granted / 459 resolved), +18.6% vs Tech Center average (above average)
Interview lift: a strong +16.1% among resolved cases with an interview
Typical timeline: 2y 6m average prosecution; 23 applications currently pending
Career history: 482 total applications across all art units

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§103: 57.3% (+17.3% vs TC avg)
§112: 6.0% (-34.0% vs TC avg)

Tech Center averages are estimates; based on career data from 459 resolved cases.
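The headline figures above can be reproduced from the raw career counts shown in the card. The sketch below is illustrative only; the Tech Center baseline of 55% is inferred from the report's "+18.6% vs TC avg" delta, not taken from USPTO data.

```python
# Reproduce the examiner stat-card figures from the raw career counts.
# TC_AVG is an assumption inferred from the report's +18.6% delta.

GRANTED = 338      # career allowances
RESOLVED = 459     # resolved applications (granted + abandoned)
TC_AVG = 0.55      # implied Tech Center 2100 average allowance rate

allow_rate = GRANTED / RESOLVED
delta_vs_tc = allow_rate - TC_AVG

print(f"Career allow rate: {allow_rate:.1%}")   # -> 73.6% (card rounds to 74%)
print(f"Delta vs TC avg: {delta_vs_tc:+.1%}")   # -> +18.6%
```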

Office Action

§103
DETAILED ACTION

This Action is a response to the RCE filed 11 November 2025. Claims 1-3, 6, 9-11, 13-14 and 17-19 are amended; no claims are canceled or newly added. Claims 1-20 remain pending for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11 November 2025 has been entered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 6-11 and 14-19 are rejected under 35 U.S.C. 103 as being unpatentable over Wiley, Evan, U.S. 10,839,039 B1 (“Wiley”) in view of Negussie et al., U.S. 2023/0088784 A1 (“Negussie”) and Bradley et al., U.S. 2020/0372204 A1 (“Bradley”), and in further view of Haze et al., U.S. 2021/0303447 A1 (“Haze”).

Regarding claim 1, Wiley teaches: A system for evaluating accessibility of one or more software applications, the system comprising: one or more processors; and a non-transitory, computer readable medium comprising instructions that, when executed by the one or more processors, causes operations (Wiley, e.g., 3:59-4:15, “computing device 101 may include a processor … Memory 121 may store software for configuring computing device 101 … to perform one or more of the various functions …”) comprising: monitoring code … for a user interface of an updated version of a software application (Wiley, e.g., 6:61-7:7, “method 400 for determining compliance of a modified version of a webpage with accessibility rules … based on the first version … and the second version … method 400 may be implemented in any suitable computing environment by a computing device …” See also, e.g., 7:21-23, “first version of the webpage may be modified by modifying HTML code used to generate the first version of the webpage …” See also, e.g., 8:5-11, “textual representation of a voiceover of the first version of the webpage.” See also, e.g., 8:45-53, “steps of the method 400 may be automatically implemented … individual … may initiate the software after displaying the
second version of the webpage …”); generating audio output using a screen reader configured to process textual or visual data of the user interface and the updated version of the software application (Wiley, e.g., 7:31-48, “voiceover of the second version of the webpage may be initiated … voiceover may operate as and/or provide similar outputs as a screen reader …”); generating textual data by processing the audio script of the user interface using speech recognition (Wiley, e.g., 7:61-65, “textual transcript of the stored recording of the voiceover of the second version of the webpage may be generated … by software that converts a recording into text …”); comparing the textual data to previous textual data, corresponding to a previous version of the software application to determine a plurality of feature differences present between the updated version of the software application and the previous version of the software application (Wiley, e.g., 8:5-26, “user interface may indicate one or more changes between the textual representation of the voiceover of the first version of the webpage and the textual transcript of the stored recording of the voiceover of the second version of the webpage …” See also, e.g., 8:54-9:8, “accessibility rules may require each data input field of a webpage to include an associated description that may be read aloud by a screen reader program … newly introduced data input field 310 may not be associated with a visual description that is displayed on the second version … may be associated with an embedded description … method 400 may confirm that the embedded … description is indeed provided and is read aloud … If not, the method 400 may enable a user to determine that a required description … is not provided, thereby necessitating modification of the underlying HTML code.”); … in the one or more feature differences, … wherein the accessibility failure point renders content in the user interface inaccessible to a visually impaired user 
(Wiley, e.g., 8:20-43, “compliance of the second version of the webpage may be determined by comparing the one or more changes to the one or more accessibility rules … to flag any changes that may not comply with the accessibility rules …” See also, e.g., 8:54-9:8, “accessibility rules may require each data input field of a webpage to include an associated description that may be read aloud by a screen reader program … newly introduced data input field 310 may not be associated with a visual description that is displayed on the second version … may be associated with an embedded description … method 400 may confirm that the embedded … description is indeed provided and is read aloud … If not, the method 400 may enable a user to determine that a required description … is not provided, thereby necessitating modification of the underlying HTML code.” Examiner’s note: the code segment of the code is the code providing the newly added data input field, which does not provide a compliant readable description (the accessibility failure point)). Wiley does not more particularly teach identifying that one or more feature differences are unintentional, and using a machine learning model trained on code identified with and without failures, identifying a code segment corresponding to a failure point in the feature differences based on the differences being unintentional. 
However, Negussie does teach: identifying that one or more feature differences of the plurality of feature differences are unintentional design changes (Negussie, e.g., ¶12, “regression prediction platform that may use one or more machine learning models … to predict whether an impending code change is likely to cause code breakage … (e.g., where a software module or software component produces an incorrect or unexpected result or behaves in unintended ways … will cause functionality breakage (e.g., an incorrect or unexpected result or unintended behavior in a live production environment …”); … using a machine learning model, trained on code identified with [an accessibility] failure and code without failure, [to identify a code segment, of the code, corresponding to a failure point] in the one or more feature differences based on the one or more feature differences being unintentional (Negussie, e.g., ¶14, “regression prediction platform may receive information related to an impending code change … to modify existing code, add new code, and/or delete existing code in a code base …” See also, e.g., ¶19, “obtain one or more feature sets from one or more data repositories that include data relevant to potential code breakage … predict a probability that the impending code change will cause code breakage … using the one or more machine learning models based on the one or more feature sets …” See also, e.g., ¶21, “regression prediction platform may execute one or more automated tests … of the impending code change … designed to execute various functions to … test a user interface appearance, and/or test the effect of user interface elements …” See also, e.g., ¶24, “predict the probability … of code breakage … based on a QA history and/or development session data …” See also, e.g., ¶27, “predict the probability … that the impending code change will cause functionality breakage based on a degree to which the impending code change adheres to one or more functional and/or
technical requirements …” and ¶28, “feature sets … provides to the one or more machine learning models may include root cause data … data related to past code changes that have caused functionality breakage and root cause data indicating the reasons why the past code changes caused the functionality breakage …” See also, e.g., ¶¶33-42, describing a ML model training process whereby input feature sets including a value of a target variable are used to train the model to predict the value of the target variable for future feature sets. The example is to predict “distracted” versus “focused” for a quality of a developer session. However, Negussie teaches that the model may be trained on code breakage and root cause data for code changes causing unintended behavioral changes, indicating that the model may be trained on feature sets including a target variable of breakage with a failure / no failure value to train the model or predict based on the trained model) for the purpose of efficiently and accurately predicting whether a particular change in code is likely to produce an unintended change in application behavior, and recommending mitigation or other actions based on a determination that a breakage is likely (Negussie, e.g., ¶¶12, 44-46).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method for testing webpage accessibility compliance as taught by Wiley to provide for identifying that one or more feature differences are unintentional, and using a machine learning model trained on code identified with and without failures, identifying a code segment corresponding to a failure point in the feature differences based on the differences being unintentional because the disclosure of Negussie shows that it was known to those of ordinary skill in the pertinent art to improve a system and method for identifying unintended application behaviors introduced by code changes to provide for identifying that one or more feature differences are unintentional, and using a machine learning model trained on code identified with and without failures, identifying a code segment corresponding to a failure point in the feature differences based on the differences being unintentional for the purpose of efficiently and accurately predicting whether a particular change in code is likely to produce an unintended change in application behavior, and recommending mitigation or other actions based on a determination that a breakage is likely (Negussie, Id.).

Wiley in view of Negussie does not more particularly teach that in response to identifying the accessibility failure point, determining based on the first feature difference and the updated version of the software application an action for removing the failure point, and transmitting a command to a remote device to display the action.
However, Bradley does teach: determining based on (1) the code segment and (2) the updated version of the software application, one or more actions for removing the accessibility failure point; and transmitting, to a remote device, providing a command for displaying, in the code editor, the one or more actions for removing the accessibility failure point (Bradley, e.g., ¶112, “machine learning system 604 may identify a set of issues (608) associated with a set of analyzed web page elements (604) … compare the identified issues with previously identified and resolved issues, and ascertain a previously applied remediation that may be applicable …” See also, e.g., ¶113, “user may then be able to browse through identified issues using a suitable user interface … view the suggested remediation code … accept the code as presented …” See also, e.g., ¶114, “system 604 may determine … that a particular image requires a modified ALT text …” See also, e.g., ¶116, “common templates … can be identified across multiple pages of a website, allowing the system to determine a set of remediations that can be applied to multiple pages …” and ¶117, “Templates may be identified … patterns in DOM structure … URL structure … accessibility test results … DOM 700 corresponding to a particular example web page having a variety of elements …” Examiner’s note: each element is associated with one or more segments of code, especially given that the remediations presented pertain to code modifications to resolve one or more identified accessibility issues) for the purpose of utilizing historical remediation information to apply improvements to accessibility features of web applications (Bradley, e.g., ¶¶110-114). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method for testing webpage accessibility compliance as taught by Wiley in view of Negussie to provide that in response to identifying the accessibility failure point, determining based on the first feature difference and the updated version of the software application an action for removing the failure point, and transmitting a command to a remote device to display the action because the disclosure of Bradley shows that it was known to those of ordinary skill in the pertinent art to improve a system and method for evaluating the accessibility of web application content to provide that in response to identifying the accessibility failure point, determining based on the first feature difference and the updated version of the software application an action for removing the failure point, and transmitting a command to a remote device to display the action for the purpose of utilizing historical remediation information to apply improvements to accessibility features of web applications (Bradley, Id.). Wiley in view of Negussie and Bradley does not more particularly teach monitoring code entered into a code editor, identifying a code segment of code corresponding to an issue, determining based on the segment and a modification to code one or more actions, and providing a command for displaying the actions in the code editor. 
However, Haze does teach: monitoring code, entered into a code editor, [for an updated version of a software application …] … [analyzing the one or more feature differences to identify] a code segment, of the code [corresponding to a … failure point …] … [determining, based on (1) the] code segment [and (2) the updated version of the software application, one or more actions] … providing [a command for displaying] in the code editor [the one or more actions] (Haze, e.g., ¶74, “authored code snippet is received … developer 108 inputs keystrokes to the IDE 108, the keystrokes are provided to the code recommendation system 106 as the authored code snippet …” See also, e.g., ¶75, “context of the authored code snippet is determined (408). It is determined whether a code recommendation is to be made (410) … A sub-set of code recommendations is provided … One or more code recommendations are displayed (416). An accept/decline or a code recommendation …” See also, e.g., ¶55, “presented recommendations 224 can be presented adjacent to code that the developer is authoring within the IDE.” See also, e.g., ¶43, “bug fix had been flagged for the previously authored code … code recommendation system can make a code recommendation to the developer … including a code snippet for addition of the security token …”) for the purpose of utilizing real-time code change information and additional context to identify one or more code-improving recommendations and presenting those recommendations to a developer in a streamlined manner (Haze, e.g., ¶¶19-22). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method for testing webpage accessibility compliance as taught by Wiley in view of Negussie and Bradley to provide for monitoring code entered into a code editor, identifying a code segment of code corresponding to an issue, determining based on the segment and a modification to code one or more actions, and providing a command for displaying the actions in the code editor because the disclosure of Haze shows that it was known to those of ordinary skill in the pertinent art to improve a system and method for monitoring and improving source code modifications to provide for monitoring code entered into a code editor, identifying a code segment of code corresponding to an issue, determining based on the segment and a modification to code one or more actions, and providing a command for displaying the actions in the code editor for the purpose of utilizing real-time code change information and additional context to identify one or more code-improving recommendations and presenting those recommendations to a developer in a streamlined manner (Haze, Id.). 
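As an editorial aside (not part of the prosecution record): the transcript-comparison step cited from Wiley in the claim 1 rejection above, diffing a screen-reader transcript of the first webpage version against the second and flagging an announced input field that carries no spoken description, can be sketched roughly as follows. The transcripts and the "edit text" rule are hypothetical examples, not material from any cited reference.

```python
import difflib

# Sketch: diff two screen-reader transcripts, as in Wiley's method 400,
# to surface changes introduced by a webpage modification. Transcripts
# and the labeling rule below are hypothetical.

def transcript_changes(old_transcript: list[str], new_transcript: list[str]) -> list[str]:
    """Return lines added in the new transcript relative to the old one."""
    diff = difflib.ndiff(old_transcript, new_transcript)
    return [line[2:] for line in diff if line.startswith("+ ")]

old = ["heading: Account Summary", "edit text: Email address"]
new = ["heading: Account Summary", "edit text: Email address", "edit text"]

added = transcript_changes(old, new)
# Simple rule mirroring Wiley's example: every input field the screen
# reader announces must carry a spoken description.
violations = [line for line in added if line.strip() == "edit text"]
print(violations)  # -> ['edit text'] (the new, unlabeled field)
```

The violation points back to the change that introduced it, which is the sense in which the Office Action reads Wiley's flagged change as the claimed "accessibility failure point."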
Regarding claim 2, the rejection of claim 1 is incorporated, and Negussie further teaches: wherein the instructions cause the one or more processors to perform operations comprising: receiving a training dataset comprising the machine learning model or a different machine learning model for identifying that the one or more feature differences are unintentional (Negussie, e.g., ¶14, “regression prediction platform may receive information related to an impending code change … to modify existing code, add new code, and/or delete existing code in a code base …” See also, e.g., ¶19, “obtain one or more feature sets from one or more data repositories that include data relevant to potential code breakage … predict a probability that the impending code change will cause code breakage … using the one or more machine learning models based on the one or more feature sets …” See also, e.g., ¶21, “regression prediction platform may execute one or more automated tests … of the impending code change … designed to execute various functions to … test a user interface appearance, and/or test the effect of user interface elements …” See also, e.g., ¶24, “predict the probability … of code breakage … based on a QA history and/or development session data …” See also, e.g., ¶27, “predict the probability … that the impending code change will cause functionality breakage based on a degree to which the impending code change adheres to one or more functional and/or technical requirements …” and ¶28, “feature sets … provides to the one or more machine learning models may include root cause data … data related to past code changes that have caused functionality breakage and root cause data indicating the reasons why the past code changes caused the functionality breakage …” See also, e.g., ¶¶33-42, describing a ML model training process whereby input feature sets including a value of a target variable are used to train the model to predict the value of the target variable for future feature
sets. The example is to predict “distracted” versus “focused” for a quality of a developer session. However, Negussie teaches that the model may be trained on code breakage and root cause data for code changes causing unintended behavioral changes, indicating that the model may be trained on feature sets including a target variable of breakage with a failure / no failure value to train the model or predict based on the trained model).

Regarding claim 3, the rejection of claim 1 is incorporated, and Bradley further teaches: wherein determining one or more actions for removing the accessibility failure point comprises: processing (1) the one or more feature differences and (2) the updated version of the software application using another machine learning model for identifying actions for removing the accessibility failure point (Bradley, e.g., ¶112, “machine learning system 604 may identify a set of issues (608) associated with a set of analyzed web page elements (604) … compare the identified issues with previously identified and resolved issues, and ascertain a previously applied remediation that may be applicable …” See also, e.g., ¶113, “user may then be able to browse through identified issues using a suitable user interface … view the suggested remediation code … accept the code as presented …” See also, e.g., ¶114, “system 604 may determine … that a particular image requires a modified ALT text …”).

Claims 6 and 7 are rejected for the reasons given in the rejection of claim 1 above. Examiner notes that with respect to claim 6, Wiley further teaches: A method (Wiley, e.g., 6:61-7:7, “method 400 for determining compliance of a modified version of a webpage with accessibility rules …”), the method comprising: [[[performing the operations of the system of claim 1, except for the transmitting operation]]]; and with respect to claim 7, Bradley teaches the transmitting limitation of claim 1.
Regarding claim 8, the rejection of claim 6 is incorporated, and Bradley further teaches: wherein determining one or more actions for removing the accessibility failure point comprises retrieving, from a data structure, the one or more actions based on (1) the one or more feature differences and (2) the updated version of the software application (Bradley, e.g., ¶112, “machine learning system 604 may identify a set of issues (608) associated with a set of analyzed web page elements (604) … compare the identified issues with previously identified and resolved issues, and ascertain a previously applied remediation that may be applicable …” See also, e.g., ¶113, “user may then be able to browse through identified issues using a suitable user interface … view the suggested remediation code … accept the code as presented …” See also, e.g., ¶114, “system 604 may determine … that a particular image requires a modified ALT text …” Examiner’s note: Wiley identifies issues based on a comparison between two versions (a feature difference); Bradley is trained on fixing similar issues (i.e., a difference that introduces an ALT text error is remediated by modifying ALT text)). Claims 9-10 are rejected for the additional reasons given in the rejections of claims 2-3 above. 
Regarding claim 11, the rejection of claim 10 is incorporated, and Bradley further teaches: receiving a training dataset comprising (1) the one or more feature differences, (2) the updated version of the software application, and (3) corresponding actions that were performed to remove accessibility failure points; and training, based on the training dataset, the different machine learning model for identifying actions for removing accessibility failure points (Bradley, e.g., ¶112, “machine learning system 604 may identify a set of issues (608) associated with a set of analyzed web page elements (604) … compare the identified issues with previously identified and resolved issues, and ascertain a previously applied remediation that may be applicable … Machine learning system 604 may then be able to recall, and possibly modify, existing remediations (614) such that they are tailored to address the specific instance identified by the scanning system.” See also, e.g., ¶113, “user may then be able to browse through identified issues using a suitable user interface … view the suggested remediation code … accept the code as presented …” See also, e.g., ¶114, “system 604 may determine … that a particular image requires a modified ALT text …” Examiner’s note: Wiley identifies issues based on a comparison between two versions (a feature difference); Bradley is trained on fixing similar issues (i.e., a difference that introduces an ALT text error is remediated by modifying ALT text)).
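As a hedged editorial illustration of the kind of element-level check behind Bradley's ALT-text example (the scanner, the suggestion message, and the markup below are hypothetical, not code from the reference):

```python
from html.parser import HTMLParser

# Hypothetical sketch: flag <img> elements whose code segment lacks an
# alt attribute and pair each with a suggested remediation, in the
# spirit of Bradley ¶¶112-114. Not code from any cited reference.

class MissingAltScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.failures = []

    def handle_starttag(self, tag, attrs):
        attr_map = dict(attrs)
        if tag == "img" and "alt" not in attr_map:
            src = attr_map.get("src", "?")
            self.failures.append(
                f'<img src="{src}"> missing alt text; '
                f'suggested remediation: add a descriptive alt attribute'
            )

scanner = MissingAltScanner()
scanner.feed('<img src="logo.png"><img src="chart.png" alt="Q3 revenue chart">')
print(scanner.failures)  # one failure: logo.png lacks alt text
```

The second image passes because it already carries a description a screen reader can announce, matching the rule Wiley's accessibility-rules discussion describes.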
Regarding claim 14, Wiley teaches: One or more non-transitory, computer readable media comprising instructions that, when executed by one or more processors, causes operations (Wiley, e.g., 3:59-4:15, “computing device 101 may include a processor … Memory 121 may store software for configuring computing device 101 … to perform one or more of the various functions …”) comprising: receiving code … , for a user interface of an updated version of a software application (Wiley, e.g., 6:61-7:7, “method 400 for determining compliance of a modified version of a webpage with accessibility rules … based on the first version … and the second version … method 400 may be implemented in any suitable computing environment by a computing device …” See also, e.g., 7:21-23, “first version of the webpage may be modified by modifying HTML code used to generate the first version of the webpage …” See also, e.g., 8:5-11, “textual representation of a voiceover of the first version of the webpage.” See also, e.g., 8:45-53, “steps of the method 400 may be automatically implemented … individual … may initiate the software after displaying the second version of the webpage …”); comparing textual data associated with the updated version of the software application to previous textual data, associated with a previous version of the software application, to determine a plurality of feature differences present between the updated version of the software application and the previous version of the software application (Wiley, e.g., 8:5-26, “user interface may indicate one or more changes between the textual representation of the voiceover of the first version of the webpage and the textual transcript of the stored recording of the voiceover of the second version of the webpage …” See also, e.g., 8:54-9:8, “accessibility rules may require each data input field of a webpage to include an associated description that may be read aloud by a screen reader program … newly introduced data input field 310 
may not be associated with a visual description that is displayed on the second version … may be associated with an embedded description … method 400 may confirm that the embedded … description is indeed provided and is read aloud … If not, the method 400 may enable a user to determine that a required description … is not provided, thereby necessitating modification of the underlying HTML code.”); … and identify a code segment, of the code, corresponding to an accessibility failure point in the one or more feature differences, wherein the accessibility failure point renders content in the user interface inaccessible to a visually impaired user (Wiley, e.g., 8:20-43, “compliance of the second version of the webpage may be determined by comparing the one or more changes to the one or more accessibility rules … to flag any changes that may not comply with the accessibility rules …” See also, e.g., 8:54-9:8, “accessibility rules may require each data input field of a webpage to include an associated description that may be read aloud by a screen reader program … newly introduced data input field 310 may not be associated with a visual description that is displayed on the second version … may be associated with an embedded description … method 400 may confirm that the embedded … description is indeed provided and is read aloud … If not, the method 400 may enable a user to determine that a required description … is not provided, thereby necessitating modification of the underlying HTML code.” Examiner’s note: the code segment of the code is the code providing the newly added data input field, which does not provide a compliant readable description (the accessibility failure point)). 
Wiley does not more particularly teach identifying that one or more feature differences are unintentional, and using a machine learning model trained on code identified with and without failures, identifying a code segment corresponding to a failure point in the feature differences based on the differences being unintentional.

However, Negussie does teach: one or more machine learning models, including a machine learning model trained on code identified with an [] failure and code without failure, to identify one or more feature differences, of the plurality of feature differences that are unintentional and identify a code segment, of the code, corresponding to an [] failure point in the one or more feature differences, (Negussie, e.g., ¶12, “regression prediction platform that may use one or more machine learning models … to predict whether an impending code change is likely to cause code breakage … (e.g., where a software module or software component produces an incorrect or unexpected result or behaves in unintended ways … will cause functionality breakage (e.g., an incorrect or unexpected result or unintended behavior in a live production environment …” See also, e.g., ¶14, “regression prediction platform may receive information related to an impending code change … to modify existing code, add new code, and/or delete existing code in a code base …” See also, e.g., ¶19, “obtain one or more feature sets from one or more data repositories that include data relevant to potential code breakage … predict a probability that the impending code change will cause code breakage … using the one or more machine learning models based on the one or more feature sets …” See also, e.g., ¶21, “regression prediction platform may execute one or more automated tests … of the impending code change … designed to execute various functions to … test a user interface appearance, and/or test the effect of user interface elements …” See also, e.g., ¶24, “predict the probability … of code
breakage … based on a QA history and/or development session data …” See also, e.g., ¶27, “predict the probability … that the impending code change will cause functionality breakage based on a degree to which the impending code change adheres to one or more functional and/or technical requirements …” and ¶28, “feature sets … provides to the one or more machine learning models may include root cause data … data related to past code changes that have caused functionality breakage and root cause data indicating the reasons why the past code changes caused the functionality breakage …” See also, e.g., ¶¶33-42, describing a ML model training process whereby input feature sets including a value of a target variable are used to train the model to predict the value of the target variable for future feature sets. The example is to predict “distracted” versus “focused” for a quality of a developer session. However, Negussie teaches that the model may be trained on code breakage and root cause data for code changes causing unintended behavioral changes, indicating that the model may be trained on feature sets including a target variable of breakage with a failure / no failure value to train the model or predict based on the trained model) for the purpose of efficiently and accurately predicting whether a particular change in code is likely to produce an unintended change in application behavior, and recommending mitigation or other actions based on a determination that a breakage is likely (Negussie, e.g., ¶¶12, 44-46).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method for testing webpage accessibility compliance as taught by Wiley to provide for identifying that one or more feature differences are unintentional, and using a machine learning model trained on code identified with and without failures, identifying a code segment corresponding to a failure point in the feature differences based on the differences being unintentional because the disclosure of Negussie shows that it was known to those of ordinary skill in the pertinent art to improve a system and method for identifying unintended application behaviors introduced by code changes to provide for identifying that one or more feature differences are unintentional, and using a machine learning model trained on code identified with and without failures, identifying a code segment corresponding to a failure point in the feature differences based on the differences being unintentional for the purpose of efficiently and accurately predicting whether a particular change in code is likely to produce an unintended change in application behavior, and recommend mitigation or other actions based on a determination that a breakage is likely (Negussie, Id.). Wiley in view of Negussie does not more particularly teach determining one or more actions for removing the accessibility failure point in response to identifying the failure point. 
However, Bradley does teach: providing, in the code editor, information based on identifying the code segment corresponding to the accessibility failure point (Bradley, e.g., ¶112, “machine learning system 604 may identify a set of issues (608) associated with a set of analyzed web page elements (604) … compare the identified issues with previously identified and resolved issues, and ascertain a previously applied remediation that may be applicable …” See also, e.g., ¶113, “user may then be able to browse through identified issues using a suitable user interface … view the suggested remediation code … accept the code as presented …” See also, e.g., ¶114, “system 604 may determine … that a particular image requires a modified ALT text …” See also, e.g., ¶116, “common templates … can be identified across multiple pages of a website, allowing the system to determine a set of remediations that can be applied to multiple pages …” and ¶117, “Templates may be identified … patterns in DOM structure … URL structure … accessibility test results … DOM 700 corresponding to a particular example web page having a variety of elements …” Examiner’s note: each element is associated with one or more segments of code, especially given that the remediations presented pertain to code modifications to resolve one or more identified accessibility issues) for the purpose of utilizing historical remediation information to apply improvements to accessibility features of web applications (Bradley, e.g., ¶¶110-114). 
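As a hedged illustration of the kind of check Bradley describes (e.g., ¶114, an image requiring modified ALT text), the sketch below flags `<img>` elements lacking ALT text and proposes a remediation for the user to review. The suggested fix string is an assumption for illustration, not Bradley's actual output.

```python
# Illustrative accessibility check: flag <img> tags with missing/empty alt.
from html.parser import HTMLParser


class MissingAltChecker(HTMLParser):
    """Collects remediation suggestions for images lacking ALT text."""

    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                src = attrs.get("src", "?")
                self.issues.append({
                    "element": f'<img src="{src}">',
                    "remediation": f'<img src="{src}" alt="DESCRIPTION NEEDED">',
                })


def find_missing_alt(html):
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.issues


page = '<div><img src="logo.png"><img src="hero.jpg" alt="Team photo"></div>'
issues = find_missing_alt(page)
```

In Bradley's described flow, suggestions like these would be surfaced in a user interface for acceptance, and templates shared across pages would let one remediation apply site-wide.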
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method for testing webpage accessibility compliance as taught by Wiley in view of Negussie to provide that in response to identifying the accessibility failure point, determining based on the first feature difference and the updated version of the software application an action for removing the failure point, and transmitting a command to a remote device to display the action because the disclosure of Bradley shows that it was known to those of ordinary skill in the pertinent art to improve a system and method for evaluating the accessibility of web application content to provide that in response to identifying the accessibility failure point, determining based on the first feature difference and the updated version of the software application an action for removing the failure point, and transmitting a command to a remote device to display the action for the purpose of utilizing historical remediation information to apply improvements to accessibility features of web applications (Bradley, Id.). Wiley in view of Negussie and Bradley does not more particularly teach receiving code entered into a code editor, identifying a code segment of code corresponding to an issue, determining based on the segment and a modification to code one or more actions, and providing, in the code editor, information based on identifying the segment.
However, Haze does teach: receiving code, entered into a code editor, [for an updated version of a software application …] … [analyzing the one or more feature differences to identify] a code segment, of the code [corresponding to a … failure point …] … [determining, based on (1) the] code segment [and (2) the updated version of the software application, one or more actions] … providing, in the code editor [information (the one or more actions) based on identifying the code segment corresponding to the failure point] (Haze, e.g., ¶74, “authored code snippet is received … developer 108 inputs keystrokes to the IDE 108, the keystrokes are provided to the code recommendation system 106 as the authored code snippet …” See also, e.g., ¶75, “context of the authored code snippet is determined (408). It is determined whether a code recommendation is to be made (410) … A sub-set of code recommendations is provided … One or more code recommendations are displayed (416). An accept/decline or a code recommendation …” See also, e.g., ¶55, “presented recommendations 224 can be presented adjacent to code that the developer is authoring within the IDE.” See also, e.g., ¶43, “bug fix had been flagged for the previously authored code … code recommendation system can make a code recommendation to the developer … including a code snippet for addition of the security token …”) for the purpose of utilizing real-time code change information and additional context to identify one or more code-improving recommendations and presenting those recommendations to a developer in a streamlined manner (Haze, e.g., ¶¶19-22). 
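The flow Haze describes (receive an authored snippet, derive context, decide whether a recommendation applies, present it adjacent to the code) can be sketched minimally as follows. The context rule and the recommendation text are hypothetical stand-ins, not Haze's implementation; the "security token" example echoes the ¶43 citation only loosely.

```python
# Hedged sketch of an IDE recommendation flow: snippet -> context -> suggestion.


def derive_context(snippet):
    """Toy context extraction: which kinds of APIs does the authored code touch?"""
    return {"uses_http": "requests." in snippet or "http" in snippet}


# Hypothetical rule base standing in for the reference's recommendation store.
RULES = [
    {"when": "uses_http",
     "recommendation": "Add a security token header to outbound requests."},
]


def recommend(snippet):
    """Return recommendations whose context condition matches the snippet."""
    context = derive_context(snippet)
    return [r["recommendation"] for r in RULES if context.get(r["when"])]


suggestions = recommend("resp = requests.get(url)")
```

An IDE integration would display `suggestions` next to the authored code and record the developer's accept/decline decision, as in the cited ¶75.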
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method for testing webpage accessibility compliance as taught by Wiley in view of Negussie and Bradley to provide for receiving code entered into a code editor, identifying a code segment of code corresponding to an issue, determining based on the segment and a modification to code one or more actions, and providing, in the code editor, information based on identifying the segment because the disclosure of Haze shows that it was known to those of ordinary skill in the pertinent art to improve a system and method for monitoring and improving source code modifications to provide for receiving code entered into a code editor, identifying a code segment of code corresponding to an issue, determining based on the segment and a modification to code one or more actions, and providing, in the code editor, information based on identifying the segment for the purpose of utilizing real-time code change information and additional context to identify one or more code-improving recommendations and presenting those recommendations to a developer in a streamlined manner (Haze, Id.). Regarding claim 15, the rejection of claim 14 is incorporated, and Wiley further teaches: generating audio output using a screen reader configured to process textual or visual data of the user interface and the updated version of the software application (Wiley, e.g., 7:31-48, “voiceover of the second version of the webpage may be initiated … voiceover may operate as and/or provide similar outputs as a screen reader …”); and generating the textual data by processing the audio output of the user interface using speech recognition (Wiley, e.g., 7:61-65, “textual transcript of the stored recording of the voiceover of the second version of the webpage may be generated … by software that converts a recording into text …”).
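Once each page version's voiceover has been transcribed (the speech-to-text step Wiley cites is stubbed out below), the transcripts can be diffed to surface feature differences. This is a minimal sketch under those assumptions, not Wiley's method; the sample transcript lines are invented.

```python
# Illustrative sketch: diff transcripts of two page versions' screen-reader output.
import difflib


def transcribe(audio_segments):
    """Stub for speech recognition: here the 'audio' is already text."""
    return [seg.lower() for seg in audio_segments]


def transcript_diff(old_audio, new_audio):
    """Return only the added/removed transcript lines between two versions."""
    old, new = transcribe(old_audio), transcribe(new_audio)
    return [line for line in difflib.ndiff(old, new)
            if line.startswith(("- ", "+ "))]


v1 = ["Heading: Welcome", "Link: Contact us", "Image: Company logo"]
v2 = ["Heading: Welcome", "Link: Contact us", "Image: unlabeled graphic"]
differences = transcript_diff(v1, v2)
```

A change like the one shown (an image announcement losing its descriptive label) is the kind of feature difference the claimed analysis would then classify as intentional or not.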
Claim 16 is rejected for the additional reasons given in the rejection of claim 8 above. Claims 17-18 are rejected for the additional reasons given in the rejections of claims 2-3 above. Claim 19 is rejected for the additional reasons given in the rejection of claim 11 above. Claims 4, 12 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wiley and Negussie in view of Bradley and Haze, and in further view of Yee, David, U.S. 6,889,337 B1 (“Yee”). Regarding claim 4, the rejection of claim 1 is incorporated, but Wiley and Negussie in view of Bradley and Haze do not more particularly teach that the textual data comprises a string of text and one or more timestamps and wherein generating the textual data comprises generating for each segment of the text string a corresponding timestamp of a time each segment was textualized. However, Yee does teach: wherein the textual data comprises (1) a string of text and (2) one or more timestamps and wherein generating the textual data comprises: generating, for each of a plurality of segments of the string of text, a corresponding timestamp representing a time at which each segment was textualized (Yee, e.g., 6:15-22, “recording functionality allows portions of resulting screen reader outputs 125 to be marked as variable for those portions of output that are expected to differ on each test run (e.g., such as timestamps, etc.) …”) for the purpose of performing screen reader output testing and regression testing based on information regarding recorded outputs (Yee, e.g., 7:1-16). 
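The claim-4 idea, textual data as a string of segments each paired with the timestamp at which it was textualized, with timestamps treated as variable fields for regression comparison as in Yee, can be sketched as below. The clock values are simulated, and the two-field representation is an assumption for illustration.

```python
# Hedged sketch: (segment, timestamp) pairs with the timestamp marked variable.


def textualize(segments, clock):
    """Pair each text segment with the (simulated) time it was textualized."""
    return [(seg, next(clock)) for seg in segments]


def stable_view(timestamped):
    """Drop the variable timestamp field so run-to-run comparison ignores it."""
    return [seg for seg, _ in timestamped]


clock = iter([0.0, 0.4, 0.9])
run = textualize(["Heading: Welcome", "Link: Contact"], clock)
```

Two test runs will produce different timestamps but identical `stable_view` output when the screen-reader text itself is unchanged, which is the comparison Yee's marked-variable recording enables.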
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method for testing webpage accessibility compliance as taught by Wiley and Negussie in view of Bradley and Haze to provide that the textual data comprises a string of text and one or more timestamps and wherein generating the textual data comprises generating for each segment of the text string a corresponding timestamp of a time each segment was textualized because the disclosure of Yee shows that it was known to those of ordinary skill in the pertinent art to improve a system and method for performing screen reader regression testing to provide that the textual data comprises a string of text and one or more timestamps and wherein generating the textual data comprises generating for each segment of the text string a corresponding timestamp of a time each segment was textualized for the purpose of performing screen reader output testing and regression testing based on information regarding recorded outputs (Yee, Id.). Claims 12 and 20 are rejected for the additional reasons given in the rejection of claim 4 above. Claims 5 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Wiley and Negussie in view of Bradley and Haze, and in further view of George et al., U.S. 10,459,835 B1 (“George”). Regarding claim 5, the rejection of claim 1 is incorporated, and Bradley further teaches: automatically performing the one or more actions for removing the accessibility failure point (Bradley, e.g., ¶114, “determine a confidence level associated with each remediation that it generates or suggests … a particular image requires a modified ALT text … once the confidence level is greater than a predetermined threshold … the system may be configured to automatically apply remediations …”). 
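The confidence-gated auto-remediation Bradley describes (apply a fix automatically only when its confidence exceeds a predetermined threshold) reduces to a simple filter, sketched below. The 0.9 threshold, field names, and fix strings are illustrative assumptions, not values from the reference.

```python
# Hedged sketch: auto-apply high-confidence remediations, queue the rest.
THRESHOLD = 0.9


def apply_remediations(suggestions):
    """Split suggested fixes into auto-applied and held-for-review sets."""
    applied, for_review = [], []
    for s in suggestions:
        bucket = applied if s["confidence"] >= THRESHOLD else for_review
        bucket.append(s["fix"])
    return applied, for_review


applied, review = apply_remediations([
    {"fix": 'set alt="Company logo" on logo.png', "confidence": 0.97},
    {"fix": "restructure navigation landmarks", "confidence": 0.60},
])
```

A completion record for each entry in `applied` is the kind of information a reporting dashboard could then relay to the user.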
Wiley and Negussie in view of Bradley and Haze do not more particularly teach transmitting a notification of successful action completion to the remote device. However, George does teach: transmitting, to a remote device, a notification of successful completion of the one or more actions (George, e.g., 10:15-22, “analytics and remediation 322 may be provided automatically to reporting 352, and a unified dashboard 324 may be used to provide the user with unified reports that … provide recommendations for remediations … (and/or provide notification that immediate fixes have been automatically performed”) for the purpose of providing a variety of testing information to a user, including indications of any automatically applied performance issue remediations (George, e.g., 10:1-22). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method for testing webpage accessibility compliance as taught by Wiley and Negussie in view of Bradley and Haze to provide for transmitting a notification of successful action completion to the remote device because the disclosure of George shows that it was known to those of ordinary skill in the pertinent art to improve a system and method for optimizing the testing of digital applications to provide for transmitting a notification of successful action completion to the remote device for the purpose of providing a variety of testing information to a user, including indications of any automatically applied performance issue remediations (George, Id.). Claim 13 is rejected for the additional reasons given in the rejection of claim 5 above. Response to Arguments In the Remarks, Applicant Argues: The prior art, alone or in combination, fails to teach or disclose one or more features of the amended independent claims, and the claims are accordingly in condition for allowance (Resp. at 11-12).
Examiner’s Response: In view of the amendments, Examiner newly cites to Negussie, and maintains the rejections under the new grounds set forth in full above. Conclusion Examiner has identified particular references contained in the prior art of record within the body of this action for the convenience of Applicant. Although the citations made are representative of the teachings in the art and are applied to the specific limitations within the enumerated claims, the teaching of the cited art as a whole is not limited to the cited passages. Other passages and figures may apply. Applicant, in preparing the response, should consider fully the entire reference as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art and/or disclosed by Examiner. Examiner respectfully requests that, in response to this Office Action, support be shown for language added to any original claims on amendment and any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist Examiner in prosecuting the application. When responding to this Office Action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 C.F.R. 1.111(c). Examiner interviews are available via telephone and video conferencing using a USPTO-supplied web-based collaboration tool. Applicant is encouraged to submit an Automated Interview Request (AIR) which may be done via https://www.uspto.gov/patent/uspto-automated-interview-request-air-form, or may contact Examiner directly via the methods below. 
Any inquiry concerning this communication or earlier communications from Examiner should be directed to Andrew M. Lyons, whose telephone number is (571) 270-3529, and whose fax number is (571) 270-4529. Examiner can normally be reached Monday to Friday from 10:00 AM to 6:00 PM ET. If attempts to reach Examiner by telephone are unsuccessful, Examiner’s supervisor, Wei Mui, can be reached at (571) 272-3708. Information regarding the status of an application may be obtained from the Patent Center system. For more information about the Patent Center system, see https://www.uspto.gov/patents/apply/patent-center. If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call (800) 786-9199 (in USA or Canada) or (571) 272-1000. /Andrew M. Lyons/Primary Examiner, Art Unit 2191

Prosecution Timeline

May 26, 2023: Application Filed
Mar 22, 2025: Non-Final Rejection — §103
Jun 11, 2025: Interview Requested
Jun 16, 2025: Examiner Interview Summary
Jun 16, 2025: Applicant Interview (Telephonic)
Jun 18, 2025: Response Filed
Sep 23, 2025: Final Rejection — §103
Nov 03, 2025: Applicant Interview (Telephonic)
Nov 11, 2025: Request for Continued Examination
Nov 12, 2025: Examiner Interview Summary
Nov 17, 2025: Response after Non-Final Action
Jan 06, 2026: Non-Final Rejection — §103
Mar 25, 2026: Examiner Interview Summary
Mar 25, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602311: METHOD, DEVICE, SYSTEM, AND COMPUTER PROGRAM FOR COVERAGE-GUIDED SOFTWARE FUZZING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602203: INTEGRATION FLOW DESIGN GUIDELINES VALIDATOR (granted Apr 14, 2026; 2y 5m to grant)
Patent 12596542: GENERATING AND DISTRIBUTING CUSTOMIZED EMBEDDED OPERATING SYSTEMS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12585465: DYNAMIC PROJECT PLANNING FOR SOFTWARE DEVELOPMENT PROJECTS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585453: SYSTEMS AND METHODS FOR UPDATING WITNESS SLED FIRMWARE (granted Mar 24, 2026; 2y 5m to grant)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 90% (+16.1%)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 459 resolved cases by this examiner. Grant probability derived from career allow rate.
