Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Application Status
The present Office action is in response to the application filed 09/11/2023. Claims 1-20 are currently pending in the application.
Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4, 6-12, 14 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over O'Reilly et al. (US 20210065584 A1) (O'Reilly) in view of Jolley et al. (US 20170242899 A1) (Jolley).
Re claims 1 and 11:
[Claim 1] O'Reilly teaches or at least suggests a method comprising: training, by a server and based on browser interaction data, a first machine learning model to predict a visually impaired spectrum score; generating, by an extension implementing the first trained machine learning model, a first visually impaired spectrum score associated with a first user (at least ¶ 6: the present invention provide methods, apparatus, systems, computing devices, computing entities, and/or the like; ¶ 30: Systems and methods for providing accessibility solutions for users with visual impairments … the system may be or comprise one or more application servers, web servers and/or software applications … a browser extension or plug-in configured to scan webpages and insert code to modify webpages received from the web server …; ¶ 50: provide customized accessibility solutions for the visual impairments of individuals that can be dynamically updated and work with any interface (e.g., browser); ¶ 64: machine-learning driven changes based on user interaction events to be used to update the program code entries for a given user; ¶ 88: the one or more machine learning models may generate one prediction for a preferred font, one prediction for a preferred font size, one prediction for a preferred zoom level, one prediction for a preferred contrast setting, and/or the like. Thus, the output may be a predicted preferred display/presentation preferences/settings/options and corresponding confidence scores; ¶ 90: the predicted output (e.g., generated prediction) of the one or more machine learning models for a given user may be a plurality of predictions and corresponding confidence scores); adjusting, by the extension and based on the first visually impaired spectrum score, one or more accessibility settings of a browser executing the extension (at least ¶ 30: An example system extension may modify the display configurations on a user computing device. The system may be enabled/disabled by the user or automatically activated by certain conditions, for instance when an application is initialized. The system may modify display configurations for a particular application or make universal modifications on the user computing device. 
The exemplary system may comprise a browser extension or plug-in configured to scan webpages and insert code to modify webpages received from the web server; ¶ 70: The internet browser extension may periodically, continuously, or in response to certain triggers (e.g., user interaction events) update the program code entries by using an application programming interface (API) to generate and transmit a request to the application server 65 for the current program code entries corresponding to the program code identifier; ¶ 93: … if the font type prediction of serif has a confidence score of 0.86, the application server 65 would then identify the program code entry to adjust a particular font type display/presentation preference/setting/option to serif); receiving, by the trained machine learning model, feedback from the first user interaction; adjusting, by the extension and based on the feedback from the first user interaction, at least one accessibility setting of the one or more accessibility settings; storing the at least one adjusted accessibility setting; and causing, by the browser and using the at least one adjusted accessibility setting, presentation of a readable document on the browser (at least ¶ 30: … the system may be profile based, generating and maintaining a user profile comprising current visual impairment information. User profiles comprising visual impairment information/data may be stored on the user computing device, application server or another analytic computing entity in communication with the system; ¶ 49: identify/determine the visual impairments of an individual and dynamically modify a display presentation (such as a user interface) with accessibility solutions corresponding to the determined visual impairments … A user with low visual acuity viewing text on a user interface, for instance, may require modifications to text size or text spacing. A user with poor perception of contrast viewing images on a user interface may require modifications to the background color to improve the contrast ratio. A user with color blindness (e.g., tritanopia, deuteranopia, protanopia or monochromacy) viewing images on a user interface may require modifications to the colors of text and/or images. As will be recognized, the disclosed approaches can be adapted to a variety of needs and circumstances; ¶ 50: provide customized accessibility solutions for the visual impairments of individuals that can be dynamically updated and work with any interface (e.g., browser); ¶ 64: … machine-learning driven changes based on user interaction events to be used to update the program code entries for a given user; ¶ 67: program code entries may be automatically updated using external sources, may be modified based at least in part on user input, may be modified based at least in part on detected interaction using artificial intelligence, and/or the like; ¶ 70: At step/operation 813 in FIG. 
8B, the user computing device 30 may store the determined visual impairments, the program code identifier, and/or the program code entries in association with the internet browser extension (or other application) and/or association with the user profile …; ¶ 88: the application server 65 formats the user interaction features, for example, into a multidimensional vector for input into the one or more machine learning models …; ¶ 89: a variety of machine learning libraries and algorithms can be used … Extreme Learning Machines (ELM), k-nearest neighbor, Naive Bayes, decision trees, support vector machines, and/or various other machine learning techniques can be used to adapt to different needs and circumstances. In one embodiment, the machine learning models (e.g., multi-class classification models) may be pluggable machine learning models; ¶ 90: the predicted output (e.g., generated prediction) of the one or more machine learning models for a given user may be a plurality of predictions and corresponding confidence scores; ¶ 93: … if the font type prediction of serif has a confidence score of 0.86, the application server 65 would then identify the program code entry to adjust a particular font type display/presentation preference/setting/option to serif).
At least in view of the fact that O'Reilly discloses “identifying visual impairments of an individual, identifying corresponding accessibility solutions for the visual impairments and modifying display presentations (e.g., user interfaces) to accommodate for the visual impairments” (¶ 2), “approaches … adapted to a variety of needs and circumstances” (¶ 49), “one or more machine learning models may generate one prediction for a preferred font, one prediction for a preferred font size, one prediction for a preferred zoom level, one prediction for a preferred contrast setting, and/or the like” (¶ 88) and “a variety of machine learning libraries and algorithms” (¶ 89), modifying O'Reilly to use first and second machine learning models as claimed would have been obvious since it has been held to be within the general skill of a worker in the art to make a singular part into plural parts as a matter of obvious engineering choice. Nerwin v. Erlichman, 168 USPQ 177, 179 (PTO Bd. of Int. 1969).
O'Reilly appears to be silent on feedback from the first user regarding an adjustment to the one or more accessibility settings; adjusting, by the extension and based on the feedback from the first user, at least one accessibility setting of the one or more accessibility settings. However, the concept and advantages of computer systems eliciting and obtaining feedback, for example, through a user approving or disapproving a step/result or through proposal of next steps/results for clarification and/or resolution of ambiguities, were old and well known to one of ordinary skill in the art before the effective filing date of the invention as evident in Jolley (at least ¶ 34: assisting the user to iteratively improve the question until the user finds exactly the answer intended …; ¶¶ 197, 199, 200: … the system may be configured to ask for assistance from the user; ¶ 313: User inputs to the intelligent agent (212) via text, voice, touch gestures and/or other inputs, act as commands to the system to start a new query, modify the exiting query, or to take final action and/or approve the result. Intelligent agent (212) responds to a user (202) in various ways to elicit further feedback from the user, propose possible results, and to suggest possible next steps. In this way, the user (202) engages intelligent agent (212) in a back and forth to build and modify their query until they approve the query and/or start over; ¶ 318: the user (202) may ask to save the result to a wishlist; ¶ 457: an additional step (not shown in FIG. 8C) is performed of rendering a result for a most probable referent but provide opportunity for clarification. In one embodiment, an additional step (not shown in FIG. 8C) is performed of resolving the ambiguity at least in part by using a machine originated query). It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have utilized the user feedback feature of Jolley to modify O'Reilly as claimed because this would amount to no more than applying known techniques to a known method (device, or product) ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 416 (2007) (“The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.”).
[Claim 11] The claim recites a computing device comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the computing device to perform steps comparable to those of representative claim 1. Accordingly, independent claim 11 is rejected for reasons similar to those previously explained when addressing representative claim 1.
Re claims 2, 4, 6-10, 12, 14 and 16-17:
[Claim 2] O'Reilly in view of Jolley teaches or at least suggests wherein the second trained machine learning model comprises one or more speech recognition models (at least O'Reilly: ¶ 45: The user input interface can comprise any of a number of devices allowing the user computing device 30 to receive data, such as … voice/speech; ¶ 57: user may respond … when prompted or selecting … via the user interface). It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have modified the “one or more machine learning models” of O'Reilly (¶ 88) as claimed, since it has been held that mere duplication of essential working parts of a device involves only routine skill in the art. St. Regis Paper Co. v. Bemis Co., 193 USPQ 8.
[Claims 4 and 14] O'Reilly in view of Jolley teaches or at least suggests automatically performing, based on detecting the first user interacting with the readable document, text-to-speech conversion of the readable document based on the one or more adjusted accessibility settings (at least O'Reilly: ¶ 45: The user input interface can comprise any of a number of devices allowing the user computing device 30 to receive data, such as … voice/speech; ¶ 57: user may respond…when prompted or selecting … via the user interface; Jolley: ¶ 101: the user input … may be acquired from a text interface or transliterated from voice to text; ¶ 313: User inputs to the intelligent agent (212) via voice).
[Claims 6-7, 10 and 12] O'Reilly in view of Jolley teaches or at least suggests determining, by the server and based on the first visually impaired spectrum score, one or more additional accessibility settings associated with the first visually impaired spectrum score; and adjusting, by the extension, the one or more additional accessibility settings of the browser; ([Claims 7 and 12]) wherein the one or more additional accessibility settings comprises one or more of: a font size; a font color; a font selection; a font spacing; a background color; a foreground color; a background pattern; a foreground pattern; a document lighting characteristic; a spotlight illumination characteristic; a magnification level; an animation characteristic; a transparency percentage; or a tactile feedback setting; ([Claim 10]) automatically performing, based on detecting the first user interacting with the readable document, the one or more adjusted additional accessibility settings (at least O'Reilly: ¶ 49: A user with low visual acuity viewing text on a user interface, for instance, may require modifications to text size or text spacing. A user with poor perception of contrast viewing images on a user interface may require modifications to the background color to improve the contrast ratio. A user with color blindness (e.g., tritanopia, deuteranopia, protanopia or monochromacy) viewing images on a user interface may require modifications to the colors of text and/or images; ¶ 64: program code identifiers (elements 520A, 520B) may be used to map to multiple program code entries … a first program code entry that changes red pixels to green pixels and a second program code entry that changes font sizes smaller than 7 to 7. This allows the use of multiple changes for a given user. Moreover, the complex program code identifier may be a string that is unique to the user and identifies multiple program code entries (instead of just being unique to the program code entries); ¶ 67: The program code entries may be automatically updated using external sources, may be modified based at least in part on user input, may be modified based at least in part on detected interaction using artificial intelligence, and/or the like; ¶ 88: the one or more machine learning models may generate one prediction for a preferred font, one prediction for a preferred font size, one prediction for a preferred zoom level, one prediction for a preferred contrast setting, and/or the like. Thus, the output may be a predicted preferred display/presentation preferences/settings/options and corresponding confidence scores; ¶ 90: the predicted output (e.g., generated prediction) of the one or more machine learning models for a given user may be a plurality of predictions and corresponding confidence scores. Exemplary outputs for a user are provided below for a predicted preferred font face/type and a predicted preferred font size; ¶ 93: At step/operation 1112 of FIG. 11B, for each predicted output that satisfies the configurable prediction threshold, the application server 65 can identify the corresponding code entries for the predicted output. For instance, if the font type prediction of serif has a confidence score of 0.86, the application server 65 would then identify the program code entry to adjust a particular font type display/presentation preference/setting/option to serif. This may be a default setting for an entire display, a setting to change arial to serif, to change font sizes in 18 to serif, and/or the like …).
[Claims 8-9] O'Reilly in view of Jolley teaches or at least suggests receiving additional feedback from the first user regarding the adjustment to the one or more additional accessibility settings; storing, based on the additional feedback, the one or more adjusted additional accessibility settings; and causing, by the extension and based on the additional feedback, presentation of the readable document on the browser using the one or more adjusted additional accessibility settings; causing, by the server based on the additional feedback and the first visually impaired spectrum score, a notification to be displayed to the first user reflecting a change in the first visually impaired spectrum score (at least Jolley: ¶ 34: assisting the user to iteratively improve the question until the user finds exactly the answer intended …; ¶ 313: User inputs to the intelligent agent (212) via text, voice, touch gestures and/or other inputs, act as commands to the system to start a new query, modify the exiting query, or to take final action and/or approve the result. Intelligent agent (212) responds to a user (202) in various ways to elicit further feedback from the user, propose possible results, and to suggest possible next steps. In this way, the user (202) engages intelligent agent (212) in a back and forth to build and modify their query until they approve the query and/or start over; ¶ 318: the user (202) may ask to save the result to a wishlist; ¶ 457: an additional step (not shown in FIG. 8C) is performed of rendering a result for a most probable referent but provide opportunity for clarification. In one embodiment, an additional step (not shown in FIG. 8C) is performed of resolving the ambiguity at least in part by using a machine originated query). It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have iteratively used the user feedback feature of Jolley to further modify O'Reilly in view of Jolley as claimed because this would amount to no more than applying known techniques to a known method (device, or product) ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 416 (2007) (“The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.”).
[Claim 16] O'Reilly in view of Jolley teaches or at least suggests wherein the instructions, when executed by the one or more processors, cause the computing device to: determine, based on the first visually impaired spectrum score, one or more additional accessibility settings associated with the first visually impaired spectrum score; and prompt, based on determining one or more reading aids, the first user to enable the one or more reading aids (at least O'Reilly: ¶ 49: visual impairments requiring modifications to display presentations for end users include low visual acuity, poor perception of contrast, color blindness, stereopsis, diplopia, and/or the like. A user with low visual acuity viewing text on a user interface, for instance, may require modifications to text size or text spacing. A user with poor perception of contrast viewing images on a user interface may require modifications to the background color to improve the contrast ratio. A user with color blindness (e.g., tritanopia, deuteranopia, protanopia or monochromacy) viewing images on a user interface may require modifications to the colors of text and/or images; ¶ 76: displaying the content based on presentation/styling information/data defined in CSS (e.g., font size, color, location on the screen and the like)).
[Claim 17] O'Reilly in view of Jolley teaches or at least suggests wherein the feedback comprises one or more of: verbal user feedback; or a response to a displayed prompt (at least O'Reilly: ¶ 45: The user input interface can comprise any of a number of devices allowing the user computing device 30 to receive data, such as … voice/speech; ¶ 57: user may respond…when prompted or selecting … via the user interface; Jolley: ¶ 34: assisting the user to iteratively improve the question until the user finds exactly the answer intended …; ¶ 101: the user input … may be acquired from a text interface or transliterated from voice to text; ¶ 313: User inputs to the intelligent agent (212) via text, voice, touch gestures and/or other inputs).
Claims 3, 5, 13, 15 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over O'Reilly in view of Jolley, and further in view of Muthukesavaraj (US 20210216699 A1) (Muthukesavaraj).
Re claims 18-20:
[Claim 18] O'Reilly teaches or at least suggests one or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause a computing device to: train, based on browser interaction data, a first machine learning model to predict a visually impaired spectrum score; generate, by an extension implementing the first trained machine learning model, a first visually impaired spectrum score associated with a first user (at least ¶ 6: the present invention provide methods, apparatus, systems, computing devices, computing entities, and/or the like; ¶ 30: Systems and methods for providing accessibility solutions for users with visual impairments … the system may be or comprise one or more application servers, web servers and/or software applications … a browser extension or plug-in configured to scan webpages and insert code to modify webpages received from the web server …; ¶ 50: provide customized accessibility solutions for the visual impairments of individuals that can be dynamically updated and work with any interface (e.g., browser); ¶ 64: machine-learning driven changes based on user interaction events to be used to update the program code entries for a given user; ¶ 88: the one or more machine learning models may generate one prediction for a preferred font, one prediction for a preferred font size, one prediction for a preferred zoom level, one prediction for a preferred contrast setting, and/or the like. Thus, the output may be a predicted preferred display/presentation preferences/settings/options and corresponding confidence scores; ¶ 90: the predicted output (e.g., generated prediction) of the one or more machine learning models for a given user may be a plurality of predictions and corresponding confidence scores); adjust, by the extension and based on the first visually impaired spectrum score, one or more accessibility settings of a browser executing the extension; receive, by the trained machine learning model, feedback from the first user regarding an adjustment to the one or more accessibility settings; adjust, by the extension and based on the feedback from the first user, at least one accessibility setting of the one or more accessibility settings; store the at least one adjusted accessibility setting; and cause, by the browser and using the at least one adjusted accessibility setting, presentation of a readable document on the browser (at least ¶ 30: … the system may be profile based, generating and maintaining a user profile comprising current visual impairment information. User profiles comprising visual impairment information/data may be stored on the user computing device, application server or another analytic computing entity in communication with the system; ¶ 49: identify/determine the visual impairments of an individual and dynamically modify a display presentation (such as a user interface) with accessibility solutions corresponding to the determined visual impairments; ¶ 50: provide customized accessibility solutions for the visual impairments of individuals that can be dynamically updated and work with any interface (e.g., browser); ¶ 64: … machine-learning driven changes based on user interaction events to be used to update the program code entries for a given user; ¶ 67: program code entries may be automatically updated using external sources, may be modified based at least in part on user input, may be modified based at least in part on detected interaction using artificial intelligence, and/or the like; ¶ 70: At step/operation 813 in FIG.
8B, the user computing device 30 may store the determined visual impairments, the program code identifier, and/or the program code entries in association with the internet browser extension (or other application) and/or association with the user profile …; ¶ 88: the application server 65 formats the user interaction features, for example, into a multidimensional vector for input into the one or more machine learning models …; ¶ 89: a variety of machine learning libraries and algorithms can be used … Extreme Learning Machines (ELM), k-nearest neighbor, Naive Bayes, decision trees, support vector machines, and/or various other machine learning techniques can be used to adapt to different needs and circumstances. In one embodiment, the machine learning models (e.g., multi-class classification models) may be pluggable machine learning models; ¶ 90: the predicted output (e.g., generated prediction) of the one or more machine learning models for a given user may be a plurality of predictions and corresponding confidence scores; ¶ 93: … if the font type prediction of serif has a confidence score of 0.86, the application server 65 would then identify the program code entry to adjust a particular font type display/presentation preference/setting/option to serif).
At least in view of the fact that O'Reilly discloses “identifying visual impairments of an individual, identifying corresponding accessibility solutions for the visual impairments and modifying display presentations (e.g., user interfaces) to accommodate for the visual impairments” (¶ 2), “approaches … adapted to a variety of needs and circumstances” (¶ 49), “one or more machine learning models may generate one prediction for a preferred font, one prediction for a preferred font size, one prediction for a preferred zoom level, one prediction for a preferred contrast setting, and/or the like” (¶ 88) and “a variety of machine learning libraries and algorithms” (¶ 89), modifying O'Reilly to use first and second machine learning models as claimed would have been obvious since it has been held to be within the general skill of a worker in the art to make a singular part into plural parts as a matter of obvious engineering choice. Nerwin v. Erlichman, 168 USPQ 177, 179 (PTO Bd. of Int. 1969).
O'Reilly appears to be silent on feedback from the first user regarding an adjustment to the one or more accessibility settings; adjust, by the extension and based on feedback from the first user, at least one accessibility setting of the one or more accessibility settings. However, the concept and advantages of computer systems eliciting and obtaining feedback, for example, through a user approving or disapproving a step/result or through proposal of next steps/results for clarification and/or resolution of ambiguities, were old and well known to one of ordinary skill in the art before the effective filing date of the invention as evident in Jolley (at least ¶ 34: assisting the user to iteratively improve the question until the user finds exactly the answer intended …; ¶¶ 197, 199, 200: … the system may be configured to ask for assistance from the user; ¶ 313: User inputs to the intelligent agent (212) via text, voice, touch gestures and/or other inputs, act as commands to the system to start a new query, modify the exiting query, or to take final action and/or approve the result. Intelligent agent (212) responds to a user (202) in various ways to elicit further feedback from the user, propose possible results, and to suggest possible next steps. In this way, the user (202) engages intelligent agent (212) in a back and forth to build and modify their query until they approve the query and/or start over; ¶ 318: the user (202) may ask to save the result to a wishlist; ¶ 457: an additional step (not shown in FIG. 8C) is performed of rendering a result for a most probable referent but provide opportunity for clarification. In one embodiment, an additional step (not shown in FIG. 8C) is performed of resolving the ambiguity at least in part by using a machine originated query). It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have utilized the user feedback feature of Jolley to modify O'Reilly as claimed because this would amount to no more than applying known techniques to a known product (method, or device) ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 416 (2007) (“The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.”).
O'Reilly in view of Jolley teaches or at least suggests receiving, by the extension, past user interactions indicating a preferred interaction event; and generating, based on the past user interactions, the first visually impaired spectrum score (at least O'Reilly: ¶ 64: machine-learning driven changes based on user interaction events to be used to update the program code entries for a given user; ¶ 88: the one or more machine learning models may generate one prediction for a preferred font, one prediction for a preferred font size, one prediction for a preferred zoom level, one prediction for a preferred contrast setting, and/or the like. Thus, the output may be a predicted preferred display/presentation preferences/settings/options and corresponding confidence scores; ¶ 90: the predicted output (e.g., generated prediction) of the one or more machine learning models for a given user may be a plurality of predictions and corresponding confidence scores). However, although O'Reilly in view of Jolley appears to be silent on this limitation, Muthukesavaraj teaches or at least suggests receiving past user interactions indicating at least one of a preferred auto-scrolling speed for the first user or a preferred text-to-speech conversion rate for the first user (at least ¶ 15: feedback methods are used to summarize content being displayed on a screen, which are adjusted for image resolution of text displayed on the screen and according to the user's scroll speed …; ¶ 17: automatically tracking, and optimization of scrolling speed; ¶ 19: Scroll speed provides information regarding the threshold scroll speed and impacts reading speed of the user. The scroll speed is preferably an approximate average which is calculated over a course of time from the content which is being scrolled. From multiple scroll speeds, an average scroll speed may be calculated and an additional tolerance amount may be added to or subtracted from the average to determine the approximate average scroll speed; ¶¶ 45-47, 54; ¶ 57: It is noted that the default reading speed and thus the threshold in which text is summarized based on scroll speed can be customized for users to additionally account for neuro-visual issues as well as being used to improve a user's cognitive ability). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have used the optimization of scrolling speed feature of Muthukesavaraj and to have modified O'Reilly in view of Jolley to allow receiving, by the extension, past user interactions indicating at least one of a preferred auto-scrolling speed for the first user or a preferred text-to-speech conversion rate for the first user, to further allow generating, based on the past user interactions, the first visually impaired spectrum score as claimed because this would amount to no more than applying known techniques to a known product (method, or device) ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 416 (2007) (“The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.”).
[Claim 19] O'Reilly in view of Jolley and Muthukesavaraj teaches or at least suggests wherein the second trained machine learning model comprises one or more speech recognition models (at least O'Reilly: ¶ 45: The user input interface can comprise any of a number of devices allowing the user computing device 30 to receive data, such as … voice/speech). It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have modified the “one or more machine learning models” of O'Reilly (¶ 88) in view of Muthukesavaraj as claimed, since it has been held that mere duplication of essential working parts of a device involves only routine skill in the art. St. Regis Paper Co. v. Bemis Co., 193 USPQ 8.
[Claim 20] O'Reilly in view of Jolley and Muthukesavaraj teaches or at least suggests wherein the feedback comprises one or more of: verbal user feedback; or a response to a displayed prompt (at least O'Reilly: ¶ 45: The user input interface can comprise any of a number of devices allowing the user computing device 30 to receive data, such as … voice/speech; ¶ 57: user may respond…when prompted or selecting … via the user interface).
Re claims 3 and 13:
[Claims 3 and 13] O'Reilly in view of Jolley appears to be silent on auto-scrolling of the readable document based on the one or more adjusted accessibility settings. However, the concept and advantages of auto-scrolling for improved user interface interaction were known to one of ordinary skill in the art before the effective filing date of the invention as evident in Muthukesavaraj (at least ¶ 15: feedback methods are used to summarize content being displayed on a screen, which are adjusted for image resolution of text displayed on the screen and according to the user's scroll speed …; ¶ 17: automatically tracking, and optimization of scrolling speed; ¶ 19: Scroll speed provides information regarding the threshold scroll speed and impacts reading speed of the user. The scroll speed is preferably an approximate average which is calculated over a course of time from the content which is being scrolled. From multiple scroll speeds, an average scroll speed may be calculated and an additional tolerance amount may be added to or subtracted from the average to determine the approximate average scroll speed; ¶¶ 45-47, 54; ¶ 57: It is noted that the default reading speed and thus the threshold in which text is summarized based on scroll speed can be customized for users to additionally account for neuro-visual issues as well as being used to improve a user's cognitive ability). Hence, it would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have used the auto-scrolling feature of Muthukesavaraj to modify O'Reilly in view of Jolley as claimed because this would amount to no more than applying known techniques to a known method (device, or product) ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 416 (2007) (“The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.”).
Re claims 5 and 15:
[Claims 5 and 15] O'Reilly in view of Jolley teaches or at least suggests receiving, by the extension, past user interactions indicating a preferred interaction event; and generating, based on the past user interactions, the first visually impaired spectrum score (at least O'Reilly: ¶ 64: machine-learning driven changes based on user interaction events to be used to update the program code entries for a given user; ¶ 88: the one or more machine learning models may generate one prediction for a preferred font, one prediction for a preferred font size, one prediction for a preferred zoom level, one prediction for a preferred contrast setting, and/or the like. Thus, the output may be a predicted preferred display/presentation preferences/settings/options and corresponding confidence scores; ¶ 90: the predicted output (e.g., generated prediction) of the one or more machine learning models for a given user may be a plurality of predictions and corresponding confidence scores). However, although O'Reilly in view of Jolley appears to be silent on this limitation, Muthukesavaraj teaches or at least suggests receiving past user interactions indicating at least one of a preferred auto-scrolling speed for the first user or a preferred text-to-speech conversion rate for the first user (at least ¶ 15: feedback methods are used to summarize content being displayed on a screen, which are adjusted for image resolution of text displayed on the screen and according to the user's scroll speed …; ¶ 17: automatically tracking, and optimization of scrolling speed; ¶ 19: Scroll speed provides information regarding the threshold scroll speed and impacts reading speed of the user. The scroll speed is preferably an approximate average which is calculated over a course of time from the content which is being scrolled. From multiple scroll speeds, an average scroll speed may be calculated and an additional tolerance amount may be added to or subtracted from the average to determine the approximate average scroll speed; ¶¶ 45-47, 54; ¶ 57: It is noted that the default reading speed and thus the threshold in which text is summarized based on scroll speed can be customized for users to additionally account for neuro-visual issues as well as being used to improve a user's cognitive ability). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to have used the optimization of scrolling speed feature of Muthukesavaraj and to have modified O'Reilly in view of Jolley to allow receiving, by the extension, past user interactions indicating at least one of a preferred auto-scrolling speed for the first user or a preferred text-to-speech conversion rate for the first user, to further allow generating, based on the past user interactions, the first visually impaired spectrum score as claimed because this would amount to no more than applying known techniques to a known method (device, or product) ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 416 (2007) (“The combination of familiar elements according to known methods is likely to be obvious when it does no more than yield predictable results.”).
Conclusion
The prior art made of record and not relied upon is listed in the attached PTO-892 form and is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDDY SAINT-VIL whose telephone number is (571) 272-9845. The examiner can normally be reached Mon-Fri 6:30 AM - 6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PETER VASAT can be reached on (571) 270-7625. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EDDY SAINT-VIL/Primary Examiner, Art Unit 3715