Prosecution Insights
Last updated: April 19, 2026
Application No. 19/055,329

UNIVERSAL HIGHLIGHTER FOR CONTEXTUAL NOTETAKING

Non-Final OA (§102, §103)
Filed: Feb 17, 2025
Examiner: SHEN, YUZHEN
Art Unit: 2623
Tech Center: 2600 — Communications
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
With Interview: 84%

Examiner Intelligence

Grants 70% of resolved cases — above average.

Career Allow Rate: 70% (507 granted / 720 resolved; +8.4% vs TC avg)
Interview Lift: +13.4% for resolved cases with an interview (moderate lift)
Typical Timeline: 2y 6m average prosecution; 44 applications currently pending
Career History: 764 total applications across all art units
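
The headline numbers above follow from the raw counts by simple arithmetic. Here is a minimal Python sketch, assuming the interview lift is added to the baseline as percentage points (the page does not state how it combines the two figures):

```python
# Reproduce the dashboard's headline figures from the raw counts shown above.
# Assumption: the "+13.4% interview lift" is additive percentage points.
granted, resolved = 507, 720
career_allow_rate = granted / resolved            # 0.7042 -> shown as 70%

interview_lift = 0.134                            # +13.4%
with_interview = career_allow_rate + interview_lift

print(f"baseline {career_allow_rate:.0%}, with interview {with_interview:.0%}")
# baseline 70%, with interview 84%
```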

Statute-Specific Performance

§101: 0.2% (-39.8% vs TC avg)
§103: 53.7% (+13.7% vs TC avg)
§102: 27.3% (-12.7% vs TC avg)
§112: 16.7% (-23.3% vs TC avg)
Tech Center average is an estimate • Based on career data from 720 resolved cases
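
Each "vs TC avg" figure is the examiner's rate minus the Tech Center baseline, so the baseline can be recovered from the numbers shown. A quick sketch, with values transcribed from above:

```python
# Each delta is (examiner rate) - (Tech Center average),
# so the implied baseline is rate - delta.
rates  = {"101": 0.002, "103": 0.537, "102": 0.273, "112": 0.167}
deltas = {"101": -0.398, "103": 0.137, "102": -0.127, "112": -0.233}

for statute, rate in rates.items():
    tc_avg = rate - deltas[statute]
    print(f"§{statute}: examiner {rate:.1%}, implied TC avg {tc_avg:.1%}")
# Every implied baseline comes out to 40.0% — a single flat line,
# consistent with the "average estimate" note above.
```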

Office Action

Detailed Action

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

2. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

3. Claims 35-40 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhu (US 20190163339 A1).

Regarding claim 35, Zhu (Figs. 1-3) discloses a device for interactive notetaking, the device comprising: a display (Fig. 2; display of user interface); a processor (Fig. 3; processing unit 304); and memory (Fig. 3; memory 306) storing instructions ([0050]) that, when executed by the processor (processing unit 304; [0019]), cause the device to perform a set of operations, the set of operations comprising: receiving a selection of one or more regions of interest displayed on the display, wherein the one or more selected regions include pixel data of content, and the pixel data comprises a copy of raster image data corresponding to the selection of the one or more regions of interest, for object recognition of the content (Fig. 2 shows an example of interactive notetaking: a data object (e.g., an electronic contact card for Seth Bing) including text data and graphic data from the digital document collection 206 of a display is selected, dragged, and dropped on the content portion 204 of the digital document 202, which generates a composite data object 222; [0045]-[0046]); detecting object data associated with the content by analyzing the pixel data of the content (Fig. 2 and [0013], [0031]-[0034], and [0046]; detection, analysis, and identification of the data object associated with the content); automatically determining context data of the content (Figs. 1-2 and [0038]-[0039]; determining context data of the content); automatically generating note data comprising the detected object data associated with the content and the determined context data of the content (Figs. 1-2 and [0047]-[0048]; generating note data or composite data comprising object data and context data); and displaying the note data via the display (Figs. 2C and 2D; displaying note data).

Regarding claim 36, Zhu (Figs. 1-3) discloses the device of claim 35, wherein the automatically determining context data of the content comprises: automatically determining the context data of the content using a machine learning model trained to predict the context data based on the content ([0015], [0025], and [0036]; determining context data using a machine learning model).

Regarding claim 37, Zhu (Figs. 1-3) discloses the device of claim 36, wherein the machine learning model is personalized to a user associated with the selection of the one or more regions of interest for notetaking ([0015], [0028], and [0036]; machine learning model).

Regarding claim 38, Zhu (Figs. 1-3) discloses the device of claim 35, wherein the automatically determining context data of the content comprises: evaluating one or more applications opened or visited by a user on a computing device (Figs. 1-2 and [0019]-[0021], [0025], and [0044]-[0048]; determining context data, application executed by user).

Regarding claim 39, Zhu (Figs. 1-3) discloses the device of claim 38, wherein the evaluating one or more applications includes evaluating at least one of a user profile, a user browsing history, or a display screen of the computing device (e.g., Fig. 2, display interface; [0013]-[0014], user profile).

Regarding claim 40, Zhu (Figs. 1-3) discloses the device of claim 35, wherein the pixel data further comprises an indication of at least one of a position of the content or a size of the content on the display ([0031] and [0033]-[0034]; position and size of the content).

Claim Rejections - 35 USC § 103

4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claims 21-34 are rejected under 35 U.S.C. 103 as unpatentable over Zhu (US 20190163339 A1) in view of Brown (US 20120110458 A1).

Regarding claim 21, Zhu discloses a computer-implemented method for interactive notetaking (Figs. 1-2; interactive notetaking), comprising: receiving a selection of one or more regions of interest for notetaking, wherein the one or more selected regions include pixel data of content, and the pixel data comprises a copy of raster image data corresponding to the selection of the one or more regions of interest, for object recognition of the content (Fig. 2 shows an example of interactive notetaking: a data object (e.g., an electronic contact card for Seth Bing) including text data and graphic data from the digital document collection 206 is selected, dragged, and dropped on the content portion 204 of the digital document 202, which generates a composite data object 222; [0045]-[0046]); evaluating the pixel data of the content, using the object recognition, to detect object data associated with the content (Fig. 2 and [0013], [0031]-[0034], and [0046]; detection, analysis, and identification of the data object associated with the content); automatically determining context data of the content (Figs. 1-2 and [0038]-[0039]; determining context data of the content); and automatically generating note data comprising the detected object data associated with the content and the determined context data of the content (Figs. 1-2 and [0047]-[0048]; generating note data or composite data comprising object data and context data).

Zhu discloses storing the generated data (Fig. 3 and [0050]), but does not expressly disclose storing the note data. However, Brown (Figs. 3-6) discloses a computer-implemented method for interactive notetaking, comprising: generating note data comprising the object data associated with the content and the determined context data of the content (Figs. 3 and 5; generating note data), and storing the note data (Fig. 6; saving note data). Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to incorporate the teaching of Brown into the interactive notetaking method of Zhu so that the created note can be saved and used in the future.
Regarding claim 22, Zhu in view of Brown discloses the method of claim 21; Zhu (Figs. 1-2) discloses wherein the automatically determining context data of the content comprises: automatically determining the context data of the content using a machine learning model trained to predict the context data based on the content ([0015], [0025], and [0036]; determining context data using a machine learning model).

Regarding claim 23, Zhu in view of Brown discloses the method of claim 22; Zhu (Figs. 1-2) discloses wherein the machine learning model is personalized to a user that is associated with the selection of the one or more regions of interest for notetaking ([0015], [0028], and [0036]; machine learning model).

Regarding claim 24, Zhu in view of Brown discloses the method of claim 21; Zhu (Figs. 1-2) discloses wherein the automatically determining context data of the content comprises: evaluating one or more applications opened or visited by a user on a computing device (Figs. 1-2 and [0019]-[0021], [0025], and [0044]-[0048]; determining context data, application executed by user).

Regarding claim 25, Zhu in view of Brown discloses the method of claim 24; Zhu (Figs. 1-2) discloses wherein the evaluating one or more applications includes evaluating at least one of a user profile, a user browsing history, or a display screen of the computing device (e.g., Fig. 2, display interface; [0013]-[0014], user profile).

Regarding claim 26, Zhu in view of Brown discloses the method of claim 21; Zhu (Figs. 1-2) discloses wherein the pixel data further comprises at least one of an indication of a position of the content or a size of the content ([0031] and [0033]-[0034]; position and size of the content).

Regarding claim 27, Zhu in view of Brown discloses the method of claim 21; Zhu (Figs. 1-2) discloses wherein the object data includes at least one of text data or image data (e.g., Fig. 2; text data, image data).

Regarding claim 28, Zhu (Figs. 1-3) discloses a system for interactive notetaking, the system comprising a processor (Fig. 3; processing unit 304) configured to execute a method comprising: receiving a selection of one or more regions of interest for notetaking, wherein the one or more selected regions include pixel data of content, and the pixel data comprises a copy of raster image data corresponding to the selection of the one or more regions of interest, for object recognition of the content (Fig. 2 shows an example of interactive notetaking: a data object (e.g., an electronic contact card for Seth Bing) including text data and graphic data from the digital document collection 206 is selected, dragged, and dropped on the content portion 204 of the digital document 202, which generates a composite data object 222; [0045]-[0046]); evaluating the pixel data of the content, using the object recognition, to detect object data associated with the content (Fig. 2 and [0013], [0031]-[0034], and [0046]; detection, analysis, and identification of the data object associated with the content); automatically determining context data of the content (Figs. 1-2 and [0038]-[0039]; determining context data of the content); and automatically generating note data comprising the detected object data associated with the content and the determined context data of the content (Figs. 1-2 and [0047]-[0048]; generating note data or composite data comprising object data and context data).

Zhu discloses storing the generated data (Fig. 3 and [0050]), but does not expressly disclose storing the note data. However, Brown (Figs. 3-6) discloses a computer-implemented method for interactive notetaking, comprising: generating note data comprising the object data associated with the content and the determined context data of the content (Figs. 3 and 5; generating note data), and storing the note data (Fig. 6; saving note data). Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to incorporate the teaching of Brown into the interactive notetaking method of Zhu so that the created note can be saved and used in the future.

Regarding claim 29, Zhu in view of Brown discloses the system of claim 28; Zhu (Figs. 1-3) discloses wherein the automatically determining context data of the content comprises: automatically determining the context data of the content using a machine learning model trained to predict the context data based on the content ([0015], [0025], and [0036]; determining context data using a machine learning model).

Regarding claim 30, Zhu in view of Brown discloses the system of claim 29; Zhu (Figs. 1-3) discloses wherein the machine learning model is personalized to a user associated with the selection of the one or more regions of interest for notetaking ([0015], [0028], and [0036]; machine learning model).

Regarding claim 31, Zhu in view of Brown discloses the system of claim 28; Zhu (Figs. 1-3) discloses wherein the automatically determining context data of the content comprises: evaluating one or more applications opened or visited by a user on a computing device (Figs. 1-2 and [0019]-[0021], [0025], and [0044]-[0048]; determining context data, application executed by user).

Regarding claim 32, Zhu in view of Brown discloses the system of claim 31; Zhu (Figs. 1-3) discloses wherein the evaluating one or more applications includes evaluating at least one of a user profile, a user browsing history, or a display screen of the computing device (e.g., Fig. 2, display interface; [0013]-[0014], user profile).

Regarding claim 33, Zhu in view of Brown discloses the system of claim 28; Zhu (Figs. 1-3) discloses wherein the pixel data further comprises an indication of at least one of a position of the content or a size of the content ([0031] and [0033]-[0034]; position and size of the content).

Regarding claim 34, Zhu in view of Brown discloses the system of claim 28; Zhu (Figs. 1-3) discloses wherein the object data includes at least one of text data or image data (e.g., Fig. 2; text data, image data).

Inquiry

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YUZHEN SHEN, whose telephone number is (571) 272-1407. The examiner can normally be reached 9:00-18:00. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chanh Nguyen, can be reached at 571-272-7772. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YUZHEN SHEN/
Primary Examiner, Art Unit 2623
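
For orientation, the operations the rejections walk through (claims 21 and 35) amount to a short pipeline: copy raster pixel data for the selected region, run object recognition over it, determine context data, then generate, store, and display the note. Below is a minimal Python sketch of that flow; every name in it (Note, recognize_objects, infer_context, take_note) is an illustrative stand-in, not code from the application or the cited references:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Note:
    object_data: dict[str, Any]   # what was recognized in the selection
    context_data: dict[str, Any]  # where/when/why it was captured

def recognize_objects(pixels: bytes) -> dict[str, Any]:
    # Stand-in for object recognition over the copied raster data
    # (e.g. OCR or image classification of the selected region).
    return {"text": "<recognized text>"}

def infer_context(pixels: bytes, user: str) -> dict[str, Any]:
    # Stand-in for context determination, e.g. a machine learning model
    # personalized to the user (claims 22-23, 36-37) or evaluation of the
    # applications the user has open (claims 24-25, 38-39).
    return {"user": user, "source_app": "<active application>"}

def take_note(pixels: bytes, user: str, notebook: list[Note]) -> Note:
    """One pass through the claimed operations."""
    note = Note(object_data=recognize_objects(pixels),
                context_data=infer_context(pixels, user))
    notebook.append(note)  # storing the note: the step mapped to Brown
    return note            # displaying it would follow (claim 35)

notebook: list[Note] = []
print(take_note(b"\x00" * 16, user="alice", notebook=notebook))
```

The storing step is the one the examiner maps to Brown rather than Zhu, which is what turns the claim 21-34 rejections into §103 combinations instead of §102 anticipation.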

Prosecution Timeline

Feb 17, 2025: Application Filed
Dec 14, 2025: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593575: DISPLAY PANEL AND DISPLAY DEVICE
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12585363: INFORMATION INPUT DEVICE, SENSITIVITY DETERMINATION METHOD, AND STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12573232: DISPLAY DEVICE INCLUDING A FINGERPRINT SENSOR
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12566497: GAZE TRACKING BASED NOTIFICATION TRACKING TIMER
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12562001: ADAPTIVE FINGERPRINT-ENROLLMENT TO FINGER CHARACTERISTICS USING LOCAL UNDER-DISPLAY FINGERPRINT SENSOR IN AN ELECTRONIC DEVICE
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 84% (+13.4%)
Median Time to Grant: 2y 6m
PTA Risk: Low

Based on 720 resolved cases by this examiner. Grant probability is derived from the career allow rate.
