Prosecution Insights
Last updated: April 19, 2026
Application No. 18/399,151

INTERACTIVE WHITEBOARD USING ARTIFICIAL INTELLIGENCE

Status: Final Rejection (§103)
Filed: Dec 28, 2023
Examiner: SILVERMAN, SETH ADAM
Art Unit: 2172
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 2 (Final)

Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 4m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 73% (above average; 327 granted / 449 resolved; +17.8% vs TC avg)
Interview Lift: +14.8% among resolved cases with an interview (moderate, roughly +15%)
Typical Timeline: 2y 4m average prosecution; 47 applications currently pending
Career History: 496 total applications across all art units
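The headline figures above are simple ratios of the raw counts. As a quick sketch (assuming the allow rate is grants divided by resolved cases, and that the "with interview" figure adds the lift in percentage points; the tool does not publish its exact formulas, so both are assumptions):

```python
# Career counts shown above (327 granted out of 449 resolved).
granted, resolved = 327, 449

# Allow rate as a percentage, rounded to one decimal.
allow_rate = round(100 * granted / resolved, 1)
print(allow_rate)  # 72.8, displayed as 73%

# "With interview" adds the published lift in percentage points.
interview_lift = 14.8
print(round(allow_rate + interview_lift))  # 88
```

The small discrepancy between the displayed 73% and the computed 72.8% is consistent with the dashboard rounding to whole percentages.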

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 20.1% (-19.9% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)
Deltas measured against a Tech Center average estimate • Based on career data from 449 resolved cases
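The four per-statute deltas are mutually consistent with a single Tech Center baseline of about 40.0%. That baseline is inferred here from rate minus delta; the page itself never states the number:

```python
# Displayed per-statute rates (%) and their deltas vs the TC average.
rates = {"101": 8.9, "103": 58.5, "102": 20.1, "112": 9.4}
deltas = {"101": -31.1, "103": 18.5, "102": -19.9, "112": -30.6}

# Each rate minus its delta recovers the same baseline, ~40.0%.
baselines = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(baselines)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```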

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 12/28/2023 and 5/28/2025 were filed before the first office action. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejection Notes

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 5, 7, 9-12, 14, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 20220019932 A1, published: 1/20/2022), in view of Kato et al. (US 20180069962 A1, published: 3/8/2018).

Claim 1.
(Currently Amended): Wang teaches a system, comprising: one or more processors; and memory storing instructions that are operable and when executed by the one or more processors (computer-readable storage medium coupled to one or more processors and having instructions stored thereon [Wang, 0007]), cause the one or more processors to: receive a handwritten input via a whiteboard user interface rendered by a computing device, the handwritten input including a handwritten text string or a hand-drawn sketch, wherein the handwritten input is displayed in real-time at the whiteboard user interface as whiteboard content (during a workshop, a sketch of the data model can be drawn on a whiteboard, paper, napkin, or the like. That is, the data model can be represented in a sketch that is a real-world, physical artifact. After the workshop, the sketch is used by a development team to begin development of an OData service based on the sketch. Using the sketch, software developers define the EDM of the OData service [Wang, 0023]); formulate an input prompt that includes data indicative of an image containing the handwritten input, as well as a request to generate additional whiteboard content about one or more topics or entities detected in the handwritten input (the image 300 is processed by the image classification service 240, which processes the image 300 through a ML model (e.g., a CNN) to selectively classify the image 300 as depicting a hand-drawn sketch. For example, the image 300 is provided as input to the ML model, which provides an output indicating a classification (e.g., hand-drawn, not hand-drawn) of the image 300. If the classification does not indicate that the image 300 depicts a hand-drawn sketch, the image 300 is determined to be irrelevant and further processing of the image 300 ends [Wang, 0042]; [Wang, FIGs. 
3A, 4]; Examiner's Note: as illustrated in the figures, implicit prompting); process the input prompt using a generative machine learning model to generate the additional whiteboard content about one or more topics or entities detected in the handwritten input (implementations of the present disclosure leverage ML models for automatic generation of OData services. In some examples, the ML models are existing ML models (e.g., ML models provided by third-party providers). That is, implementations of the present disclosure can be realized without generating ML models from scratch. In some examples, one or more ML models are provided as CNNs of artificial intelligence (AI) service provider (e.g., Functional Services API of SAP Leonardo Machine Learning Foundation provided by SAP SE of Walldorf, Germany) [Wang, 0065]); and cause the additional whiteboard content about one or more topics or entities to be rendered at the whiteboard user interface in a location offset from the handwritten input (FIG. 4 depicts an example diagram 400 of an OData service generated from the example sketch 300 of FIG. 3A. The example diagram 400 is a visual depiction of the example EDM of Listing 1 above. More particularly, the example diagram 400 includes entities 402, 404, 406 representing the entities determined from the image 300 and includes associations 408, 410 representing associations determined from the image 300 [Wang, 0054, FIGs. 3A & 4]; Examiner's Note: as illustrated in the figures cited). Wang does not teach generate the additional whiteboard content detected in the handwritten input, wherein the additional whiteboard content comprises at least some content that is exclusive of, but is responsive to, the handwritten input. 
However, Kato teaches generate the additional whiteboard content detected in the handwritten input, wherein the additional whiteboard content comprises at least some content that is exclusive of, but is responsive to, the handwritten input (the gesture conversion is processing to convert a gesture event input to the communication terminal 500f that is the electronic blackboard by a user with an electronic pen or in handwriting into data in a format receivable by the whiteboard unit 620 [Kato, 0261]). Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the real-time handwritten whiteboard UI invention of Wang to include the additional input responsive to handwritten input features of Kato. One would have been motivated to make these modifications to expand functionality by interpreting certain handwritten input as gesture input.

Claim 11, having similar claim elements to claim 1, is likewise rejected.

Claim 3: The combination of Wang and Kato, teaches the system of claim 1.
Wang further teaches wherein the instructions further cause the one or more processors to: cause one or more GUI elements to be rendered at the whiteboard user interface, the one or more GUI elements each for including a distinct type of content in the additional whiteboard content, receive a control input that selects a particular GUI element, from the one or more GUI elements, for including a particular type of content in the additional whiteboard content, and formulate the input prompt to further include the control input, in addition to including the data indicative of the image that contains the handwritten input and the request to generate the additional whiteboard content (the key property of the entity [Supplier] (i.e., [SupplierID]) is populated to a list of UI controls of ObjectCells on the master page, and the data to be rendered on the ObjectCell is bound to the entity [Supplier] by setting the [EntitySet] and [Service] in the property group [Target] [Wang, 0063]. The processor 610 is capable of processing instructions stored in the memory 620 or on the storage device 630 to display graphical information for a user interface on the input/output device 640 [Wang, 0079]). Claim 14, having similar claim elements to claim 3, is likewise rejected. Claim 5: The combination of Wang and Kato, teaches the system of claim 1. Wang further teaches wherein the generative machine learning model is a transformer-based machine learning model (implementations of the present disclosure are directed to a service provisioning platform for automatically generating OData services from images using machine learning (ML) [Wang, 0004]). Claim 18, having similar claim elements to claim 5, is likewise rejected. 
Claim 7: The combination of Wang and Kato, teaches the system of claim 1, wherein the handwritten input includes a mathematical question, and the additional whiteboard content includes a solution to the mathematical question (Examiner's Note: a whiteboard can implicitly be used to write ANYTHING on, including a mathematical question and its solution). Claim 17, having similar claim elements to claim 7, is likewise rejected. Claim 9: The combination of Wang and Kato, teaches the system of claim 1. Wang further teaches wherein the input prompt is formulated in response to at least one triggering condition, of a plurality of pre-determined triggering conditions each for triggering generation of the additional whiteboard content, being satisfied (a sketch of a data model, which is hand drawn on a physical, real-world artifact (e.g., a whiteboard during a design thinking workshop) is recorded in an image (digital image) by a device (e.g., mobile device) and the image is submitted to the OData EDM generator, which is executed in a cloud-computing environment. Real-time may be used to describe operations that are automatically executed in response to a triggering event, for example, without requiring human input [Wang, 0027]). Claim 10: The combination of Wang and Kato, teaches the system of claim 9. Wang further teaches wherein the at least one triggering condition being no additional handwritten input being received after a predefined duration since receiving the handwritten input, or being receiving a user confirmation that confirms the request to generate the additional whiteboard content (a sketch of a data model, which is hand drawn on a physical, real-world artifact (e.g., a whiteboard during a design thinking workshop) is recorded in an image (digital image) by a device (e.g., mobile device) and the image is submitted to the OData EDM generator, which is executed in a cloud-computing environment. 
Real-time may be used to describe operations that are automatically executed in response to a triggering event, for example, without requiring human input [Wang, 0027]).

Claim 12: The combination of Wang and Kato, teaches the method of claim 11. Wang further teaches wherein the image containing the handwritten input is acquired from the whiteboard user interface (a sketch of a data model, which is hand drawn on a physical, real-world artifact (e.g., a whiteboard during a design thinking workshop) is recorded in an image (digital image) by a device [Wang, 0027]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2, 4, 6, 8, 13, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 20220019932 A1, published: 1/20/2022), in view of Kato et al. (US 20180069962 A1, published: 3/8/2018), and in further view of Nelson et al. (US 20190108494 A1, published: 4/11/2019).

Claim 2: The combination of Wang and Kato, teaches the system of claim 1.
The combination of Wang and Kato, does not teach wherein the instructions further cause the one or more processors to: receive a spoken utterance, the spoken utterance being received contemporaneously with the handwritten input, and formulate the input prompt to further include a transcript of the spoken utterance, in addition to including the data indicative of the image that contains the handwritten input and the request to generate the additional whiteboard content. However, Nelson teaches wherein the instructions further cause the one or more processors to: receive a spoken utterance, the spoken utterance being received contemporaneously with the handwritten input, and formulate the input prompt to further include a transcript of the spoken utterance, in addition to including the data indicative of the image that contains the handwritten input and the request to generate the additional whiteboard content (Meeting intelligence apparatus 102 may analyze meeting content data using any of a number of tools, such as speech or text recognition, voice or face identification, sentiment analysis, object detection, gestural analysis, thermal imaging, etc. Based on analyzing the meeting content data and/or in response to requests, for example, from electronic meeting applications, meeting intelligence apparatus 102, either alone or in combination with one or more electronic meeting applications, performs any of a number of automated tasks [Nelson, 0115]. The use of speech and/or text recognition provides a more favorable user experience by allowing users to manage various aspects of electronic meetings using voice commands and/or text commands [Nelson, 0184]. translation/transcription services S1 and S2 are likely to provide accurate results when processing text or audio data, or a portion thereof, that corresponds to “Speaker A” speaking, while translation/transcription service S3 is likely to provide accurate results when “Speaker D” is speaking [Nelson, 0275]). 
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the real-time handwritten whiteboard UI invention of the combination of Wang and Kato, to include the voice input, user profile, search, touchscreen, and natural language features of Nelson. One would have been motivated to make these modifications to improve the user experience by providing a feature-rich whiteboard user interface that gives users more options than the prior art.

Claim 13, having similar claim elements to claim 2, is likewise rejected.

Claim 4: The combination of Wang and Kato, teaches the system of claim 1. Wang does not teach wherein the instructions further cause the one or more processors to: retrieve a user profile associated with a registered user account of a whiteboard application that provides access to the whiteboard user interface, and formulate the input prompt to further include the user profile, in addition to including the data indicative of the image that contains the handwritten input and the request to generate the additional whiteboard content. However, Nelson teaches wherein the instructions further cause the one or more processors to: retrieve a user profile associated with a registered user account of a whiteboard application that provides access to the whiteboard user interface, and formulate the input prompt to further include the user profile, in addition to including the data indicative of the image that contains the handwritten input and the request to generate the additional whiteboard content (users at each participating node may specify a language for their node, for example via meeting controls 222, which may be used as a default language for that node.
Users may also specify a preferred language in their user profile, or in association with their user credentials, to allow an electronic meeting application to automatically default to the preferred language for a meeting participant [Nelson, 0217]. It is presumed that a user selected the meeting participant “Sue K.” and participant analysis report 720 depicts a meeting participant profile for meeting participant “Sue K.” [Nelson, 0240]. Controls, such as “+” and “−” allow a user to view the meeting participant's profile for a particular electronic meeting [Nelson, 0241]). Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the real-time handwritten whiteboard UI invention of the combination of Wang and Kato, to include the voice input, user profile, search, touchscreen, and natural language features of Nelson. One would have been motivated to make these modifications to improve the user experience by providing a feature-rich whiteboard user interface that gives users more options than the prior art.

Claim 15, having similar claim elements to claim 4, is likewise rejected.

Claim 6: The combination of Wang and Kato, teaches the system of claim 11. Wang further teaches wherein the handwritten input is a hand-drawn object, determined based on a trained machine learning model trained for object classification (implementations of the present disclosure are directed to a service provisioning platform that automatically generates OData services from images using machine learning (ML). Implementations can include actions of receiving, by an OData service generation platform executed in one or more cloud-computing environments, an image including data representative of a sketch on a physical artifact [Wang, 0021]).
The combination of Wang and Kato, does not teach and the additional whiteboard content includes content responsive to a search engine query, wherein the search engine query is formulated based on an object type of the hand-drawn object. However, Nelson teaches and the additional whiteboard content includes content responsive to a search engine query, wherein the search engine query is formulated based on an object type of the hand-drawn object ([Nelson, 0309]). Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the real-time handwritten whiteboard UI invention of the combination of Wang and Kato, to include the voice input, user profile, search, touchscreen, and natural language features of Nelson. One would have been motivated to make these modifications to improve the user experience by providing a feature-rich whiteboard user interface that gives users more options than the prior art.

Claim 16, having similar claim elements to claim 6, is likewise rejected.

Claim 8: The combination of Wang and Kato, teaches the system of claim 1. The combination of Wang and Kato, does not teach wherein the whiteboard user interface is rendered at an electronic display of the computing device, and wherein the electronic display comprises a touchscreen display. However, Nelson teaches wherein the whiteboard user interface is rendered at an electronic display of the computing device, and wherein the electronic display comprises a touchscreen display (a user may select a visual control by touching display 1740 with their finger, using a stylus, etc. [Nelson, 0390]).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the real-time handwritten whiteboard UI invention of the combination of Wang and Kato, to include the voice input, user profile, search, touchscreen, and natural language features of Nelson. One would have been motivated to make these modifications to improve the user experience by providing a feature-rich whiteboard user interface that gives users more options than the prior art.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 20220019932 A1, published: 1/20/2022), in view of Fieldman (US 10126927 B1, published: 11/13/2018).

Claim 21: Wang teaches a method implemented using one or more processors (computer-readable storage medium coupled to one or more processors and having instructions stored thereon [Wang, 0007]), the method comprising: receiving a handwritten input via a whiteboard user interface rendered by a computing device, the handwritten input including a handwritten sketch, wherein the handwritten input is displayed in real-time at the whiteboard user interface as whiteboard content (during a workshop, a sketch of the data model can be drawn on a whiteboard, paper, napkin, or the like. That is, the data model can be represented in a sketch that is a real-world, physical artifact. After the workshop, the sketch is used by a development team to begin development of an OData service based on the sketch.
Using the sketch, software developers define the EDM of the OData service [Wang, 0023]); formulating an input prompt that includes data indicative of an image containing the handwritten input, as well as a request to generate additional whiteboard content detected in the handwritten input (the image 300 is processed by the image classification service 240, which processes the image 300 through a ML model (e.g., a CNN) to selectively classify the image 300 as depicting a hand-drawn sketch. For example, the image 300 is provided as input to the ML model, which provides an output indicating a classification (e.g., hand-drawn, not hand-drawn) of the image 300. If the classification does not indicate that the image 300 depicts a hand-drawn sketch, the image 300 is determined to be irrelevant and further processing of the image 300 ends [Wang, 0042]; [Wang, FIGs. 3A, 4]; Examiner's Note: as illustrated in the figures, implicit prompting); processing the input prompt using a generative machine learning model to generate the additional whiteboard content detected in the handwritten input (implementations of the present disclosure leverage ML models for automatic generation of OData services. In some examples, the ML models are existing ML models (e.g., ML models provided by third-party providers). That is, implementations of the present disclosure can be realized without generating ML models from scratch. In some examples, one or more ML models are provided as CNNs of artificial intelligence (AI) service provider (e.g., Functional Services API of SAP Leonardo Machine Learning Foundation provided by SAP SE of Walldorf, Germany) [Wang, 0065]); and causing the additional whiteboard content about one or more topics or entities to be rendered at the whiteboard user interface in a location offset from the handwritten input (FIG. 4 depicts an example diagram 400 of an OData service generated from the example sketch 300 of FIG. 3A. 
The example diagram 400 is a visual depiction of the example EDM of Listing 1 above. More particularly, the example diagram 400 includes entities 402, 404, 406 representing the entities determined from the image 300 and includes associations 408, 410 representing associations determined from the image 300 [Wang, 0054, FIGs. 3A & 4]; Examiner's Note: as illustrated in the figures cited). Wang does not teach the handwritten input including a handwritten sketch of a structural chemical formula; generate additional whiteboard content about a chemical detected in the handwritten input; wherein the additional whiteboard content comprises at least one of a label of the chemical detected in the handwritten input, a caption about the chemical detected in the handwritten input, a prompt about the chemical detected in the handwritten input, a molecular model, or a description of the chemical detected in the handwritten inputs. However, Fieldman teaches the handwritten input including a handwritten sketch of a structural chemical formula; generate additional whiteboard content about a chemical detected in the handwritten input; wherein the additional whiteboard content comprises at least one of a label of the chemical detected in the handwritten input, a caption about the chemical detected in the handwritten input, a prompt about the chemical detected in the handwritten input, a molecular model, or a description of the chemical detected in the handwritten inputs (the OSES Server may cause a Whiteboard Editor GUI to be displayed to the user which provides the user with the ability to annotate and/or edit (e.g., crop, draw on, resize, apply filters, etc.) the received image/photo before it is inserted into an OCD Room. Detect and parse chemical formula(s) which are identified in the received photo/image, and display the identified chemical formula(s) as an editable formula object (e.g., rather than an image). 
This formula object may be edited to add variables, apply solvers, etc., and/or to automatically generate models of detected chemical compounds. Detect and parse spreadsheet tables which are identified in the received photo/image, and convert such recognized spreadsheet tables into editable spreadsheet tables [Fieldman 51:last par - 52:first par]). Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the real-time handwritten whiteboard UI invention of Wang to include the chemical formula detection with handwritten input features of Fieldman. One would have been motivated to make these modifications so that users can work with chemical equations, and so that said chemical equations may be generated to be added to the text of a whiteboard. Such would assist users with jobs related to chemistry.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SETH A SILVERMAN whose telephone number is (571)272-9783. The examiner can normally be reached Mon-Thur, 8AM-4PM MST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Queler can be reached at (571)272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Seth A Silverman/Primary Examiner, Art Unit 2172

Prosecution Timeline

Dec 28, 2023: Application Filed
Sep 12, 2025: Non-Final Rejection — §103
Dec 04, 2025: Applicant Interview (Telephonic)
Dec 04, 2025: Examiner Interview Summary
Dec 09, 2025: Response Filed
Feb 19, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12587581: SYSTEMS, METHODS, AND MEDIA FOR CAUSING AN ACTION TO BE PERFORMED ON A USER DEVICE (2y 5m to grant; granted Mar 24, 2026)
Patent 12579201: INFORMATION PROCESSING SYSTEM (2y 5m to grant; granted Mar 17, 2026)
Patent 12578200: NAVIGATIONAL USER INTERFACES (2y 5m to grant; granted Mar 17, 2026)
Patent 12572269: PERFORMING A CONTROL OPERATION BASED ON MULTIPLE TOUCH POINTS (2y 5m to grant; granted Mar 10, 2026)
Patent 12572261: SPATIAL NAVIGATION AND CREATION INTERFACE (2y 5m to grant; granted Mar 10, 2026)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 88% (+14.8%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate

Based on 449 resolved cases by this examiner. Grant probability derived from career allow rate.
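Projecting a likely grant date from the median is simple date arithmetic. A sketch, assuming "2y 4m" means 28 calendar months counted from the Dec 28, 2023 filing date (the page does not define the measure this precisely):

```python
from datetime import date

filed = date(2023, 12, 28)
months = 2 * 12 + 4  # "2y 4m" read as 28 calendar months

# Add whole months, keeping the day of month fixed.
total = filed.month - 1 + months
projected = date(filed.year + total // 12, total % 12 + 1, filed.day)
print(projected)  # 2026-04-28
```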
