Prosecution Insights
Last updated: April 19, 2026
Application No. 18/148,343

SYSTEMS AND METHODS FOR DYNAMIC SKETCHING WITH EXAGGERATED CONTENT

Status: Non-Final Office Action (§103)
Filed: Dec 29, 2022
Examiner: DAVIS, DAVID DONALD
Art Unit: 2627
Tech Center: 2600 — Communications
Assignee: Wacom Co. Ltd.
OA Round: 7 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 7-8
Median Time to Grant: 3y 2m
Grant Probability With Interview: 79%

Examiner Intelligence

Career Allow Rate: 70% (631 granted / 900 resolved; +8.1% vs TC avg), above average
Interview Lift: +9.1% across resolved cases with interview (moderate lift)
Avg Prosecution: 3y 2m typical timeline; 41 applications currently pending
Total Applications: 941 across all art units (career history)
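
For reference, the card figures above reduce to simple ratios over the examiner's resolved cases. A minimal Python sketch of that arithmetic, using the counts shown above (631 allowances out of 900 resolved); the interviewed-case counts do not appear on this page, so the ones below are hypothetical placeholders that only roughly reproduce the displayed +9.1% lift.

```python
# Arithmetic behind the examiner cards above. Counts marked "hypothetical"
# are placeholders; only 631 granted / 900 resolved appears on this page.

granted, resolved = 631, 900
career_allow_rate = granted / resolved                 # ~70.1%

# "+8.1% vs TC avg" implies a Tech Center baseline near 62%.
implied_tc_average = career_allow_rate - 0.081

# Interview lift: allow rate among interviewed cases vs. the career rate.
# (Assumed definition; the page reports only the resulting lift.)
interviewed_granted, interviewed_resolved = 158, 200   # hypothetical counts
with_interview_rate = interviewed_granted / interviewed_resolved
interview_lift = with_interview_rate - career_allow_rate

print(f"Career allow rate:   {career_allow_rate:.1%}")
print(f"Implied TC average:  {implied_tc_average:.1%}")
print(f"With-interview rate: {with_interview_rate:.1%}")
print(f"Interview lift:      {interview_lift:+.1%}")
```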

Statute-Specific Performance

§101: 1.2% (-38.8% vs TC avg)
§103: 41.6% (+1.6% vs TC avg)
§102: 40.8% (+0.8% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 900 resolved cases.
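
The figures above appear to report the share of this examiner's rejections raised under each statute (the four shares sum to about 94%, leaving a small remainder for other grounds). A short sketch of how such shares fall out of raw rejection counts; the counts below are hypothetical, chosen only so the resulting percentages match the figures shown.

```python
# Hypothetical per-statute rejection counts; only the resulting shares
# appear on this page. "other" stands in for grounds outside the four
# statutes (e.g., double patenting).
rejection_counts = {"101": 12, "103": 416, "102": 408, "112": 106, "other": 58}

total = sum(rejection_counts.values())                  # 1000 rejections
for statute, count in rejection_counts.items():
    label = f"§{statute}" if statute != "other" else "other"
    print(f"{label}: {count / total:.1%} of rejections")
```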

Office Action

§103
Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Continued Examination Under 37 CFR 1.114 A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 19, 2025 has been entered. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1-8 and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over Petill et al (2021/0012113) in view of Rhodes et al (US 2019/0096129). As per claim 1 Petill et al discloses: A method comprising: receiving one or more signals indicative of a plurality of spatial positions of a position indicator in a 3-dimensional space {figures 6, 15, and 16 & [0077] As another example, the natural language input may be a text input received via a virtual keyboard or another type of GUI element that the user may interact with via gestures detected by the camera device 18. See also [0086]}; receiving one or more signals indicative of a surface of a physical object 30E in the 3-dimensional space {figures 3-5D and 16 & [0083] At 204, the method 200 may include detecting a physical object in the physical environment. Physical objects may be detecting using a scene reconstruction and deconstruction pipeline described above with reference to FIG. 4. One example process is illustrated in FIG. 5 (A) which shows a portion of an example depth image centered at the physical table object 30E of the physical environment 28 of FIG. 3. The depth image is processed by surface reconstruction techniques at step (1) of the surface reconstruction and decomposition pipeline 44 to generate a geometric representation 46 of the physical environment 28. FIG. 5 (B) shows an example geometric representation 46 in the form of a surface mesh. At FIG. 
5 (C), object boundaries of the geometric representation 46 of the table object have been identified, and a geometric representation of the table object 30E has been segmented into a detected object.}; obtaining a plurality of coordinates of the position indicator while the position indicator moved on the surface of the physical object in the 3-dimensional in a pattern that specifies a portion of the surface of the physical object 30E based on the one or more signals indicative of the plurality of spatial positions of the position indicator and the one or more signals indicative of the surface of the physical object 30E; {figures 4 and 16 & [0047] and [0084] At 206, the method 200 may include recognizing the physical object based on a trained artificial intelligence machine learning model. In one example, the object detected at step 204 may be processed by the trained artificial intelligence learning model 40 to recognize the detected object based on different factors or characteristics of the object, such as, for example, geometric shape, size, color, surface texture, etc. Note: where in step 206 the physical object is recognized using object recognition in step (3) of figure 4}; determining whether the position indicator is on or over the portion of the surface of the physical object 30E specified by the pattern in which the position indicator is moved based on the plurality of coordinates of the position indicator obtained while the position indicator is moved on the surface of the physical object 30E and the one or more signals indicative of the plurality of spatial positions of the position indicator {figures 6 and 16 & [0086] At 210, the method 200 may include receiving a user input directed to the detected physical object in the physical environment, the user input including a user specified semantic tag. An example user input is illustrated in FIG. 6. In one example, the user input may be detected as being directed to a particular object based on a user indicated direction that may be determined based on a detected gaze direction and/or a detected gesture input. See also [0055]}; responsive to determining that the position indicator is on or over the portion of the surface of the physical object 30E, obtaining coordinates corresponding to an input gesture based on the one or more signals indicative of the plurality of spatial positions of the position indicator; and storing the coordinates corresponding to the input gesture. {figure 16, [0086] & [0087] At 212, the method 200 may include associating the detected physical object with the user specified semantic tag in the database. Both user specified semantic tags and artificial intelligence generated semantic tags may be associated with a particular physical object and stored in the database. In this manner, the database may be continuously updated in real-time according to the steps of method 200. } Regarding claim 1 Petill et al is silent as to: obtaining coordinates of the position indicator while the position indicator is moving on the surface of the physical object in the 3-dimensional space . . . wherein the portion of the surface of the physical object is smaller than the surface of the physical object; receiving a signal indicating that the portion of the surface of the physical object is an input surface in response to a mechanical actuation of a switch on the position indicator; after receiving the signal indicating that the portion of the surface of the physical object is the input surface. 
With respect to claim 1 Rhodes et al discloses: obtaining coordinates of the position indicator while the position indicator is moving on the surface of the physical object in the 3-dimensional space {[0154] By using depth mapping or other three-dimensional modeling method, the augmented reality composition system 110 determines a three-dimensional coordinate location of the writing device 106 on the real-world surface.} . . . wherein the portion of the surface of the physical object is smaller than the surface of the physical object; receiving a signal indicating that the portion of the surface of the physical object is an input surface in response to a mechanical actuation of a switch 303 on the position indicator; after receiving the signal indicating that the portion of the surface of the physical object is the input surface {[0061] Specifically, the augmented reality composition system 110 tracks pressure, location, orientation, and movement of the writing device 106 as the user 112 moves the writing device 106 along the real-world surface 202. } It would have been obvious to a person having ordinary skill in the art at the time the invention was effectively filed to provide the method of Petill et al with obtaining coordinates of the position indicator while the position indicator is moving on the surface of the physical object in the 3-dimensional space as taught by Rhodes et al. The rationale is as follow: one of ordinary skill in the art at the time the invention was effectively filed would have been motivated to provide a method with obtaining coordinates of the position indicator while the position indicator is moving on the surface of the physical object in the 3-dimensional space so as to identify the where the position indicator is moving for user interaction. As per claim 2 Petill et al discloses: The method of claim 1, further comprising: displaying a virtual representation of the position indicator along with a virtual representation of the portion of the surface of the physical object 30E. {figure 8 & [0067] FIG. 8 illustrates another example disambiguation technique. Based on determining that the confidence value for the selection of the target virtual object 72 or target physical object 74 is lower than the threshold value, the HMD device 24 may be configured to present a query 78 to the user for a user confirmation 80 of the selection of the target virtual object 72 or the target physical object 74. In the illustrated example, the query 78 is presented as the text “Did you mean this table 17” in a virtual text box located near the selected target physical object 74 of the first table physical object 30E. } As per claim 3 Petill et al discloses: The method of claim 1, further comprising: receiving one or more signals indicative of a plurality of positions of a switch of the position indicator {figures 6, 15 and 16, [0077] As another example, the natural language input may be a text input received via a virtual keyboard or another type of GUI element that the user may interact with via gestures detected by the camera device 18. 
See also [0086] }; and determining whether the switch of the position indicator is in a first position, based on the one or more signals indicative of the plurality of positions of the switch of the position indicator, wherein the obtaining of the coordinates corresponding to the input gesture is responsive to determining that the position indicator is on or over the portion of the surface of the physical object 30E and responsive to determining that the switch of the position indicator is in the first position. {figures 6-10 & [0055] – [0070] Note: where the user moves virtual object (56A) to living room table (30E) based on the user gesture input. } As per claim 4 Petill et al discloses: The method of claim 1, further comprising: translating coordinates corresponding to the portion of the surface of the physical object 30E from a first coordinate system to a second coordinate system, the first coordinate system being different from the second coordinate system. {figure 5A-D & [0051] – [0054] Note: where the physical object (30E) may be translated into one of the four types of coordinate systems as shown in the figures } As per claim 5 Petill et al discloses: The method of claim 1 wherein: the position indicator includes a plurality of reference tags, and the one or more signals indicative of the plurality of spatial positions of the position indicator are indicative of a plurality of positions of the reference tags. {figures 6-10 & [0055]-[0077] Note: where the physical objects (30E), (30F), virtual objects (56A) and the position indicator (78) have tags } As per claim 6 Petill et al discloses: The method of claim 5 wherein: each of the reference tags includes a visually distinct pattern formed thereon {figures 6-10 & [0055]-[0077] Note: where the physical objects (30E), (30F) and virtual objects (56A) have different reference tags }, and the one or more signals indicative of the plurality of spatial positions of the position indicator include image data corresponding to a plurality of images of the references tags. {figures 6-10 & [0055]-[0077] Note: where the position indicator (78) has a text message as a tag } As per claim 7 Petill et al discloses: The method of claim 5 wherein: each of the reference tags emits light, and the one or more signals indicative of the plurality of spatial positions of the position indicator include image data corresponding to a plurality of images of the references tags. {figures 6-10 & [0055]-[0077] Note: where the tags emit light as they are displayed on HMD device (24) and include image data as they have text labels} As per claim 8 Petill et al discloses: A method comprising: receiving one or more signals indicative of a plurality of spatial positions of a position indicator in a 3-dimensional space {figures 6, 15, and 16 & [0077] As another example, the natural language input may be a text input received via a virtual keyboard or another type of GUI element that the user may interact with via gestures detected by the camera device 18. See also [0086]}; obtaining one or more signals indicative of a scaling factor; {figure 1, [0061] In one example, to identify the user specified operation 64, the natural language processing module 42 may be configured to identify a verb in the natural language input 62, and compare the identified verb to the list of operations for virtual object 68. For example, the verbs “move”, “transfer”, “put”, etc., may each be associated with a move operation. 
As another example, the verbs “open”, “show”, “start”, etc., may each be associated with the application start operation. As yet another example, the verbs “modify”, “resize”, “change”, etc., may each be associated with the modification operation. It should be appreciated that the specific operations and associated verbs/terms discussed above are merely exemplary, and that other types of operations may be performed by the virtual object handling module 70, and each operation may be associated with other verbs/terms identified by the natural language processing module 42. As a few other non-limiting examples, the operations 68 may further include a copy operation, a deletion operation, a resizing operation, a fitting operation, etc. } obtaining coordinates corresponding to an input gesture in the 3-dimensional space based on the one or more signals indicative of the plurality of spatial positions of the position indicator; {figures 1, 6 and 15 & [0077] At 104, the method 100 may include receiving a natural language input from a user via an input device. In one example, the natural language input is a voice input received via an input device. However, in scenarios where voice input is not useable (e.g. noisy environment, broken microphone, etc.) or to provide affordance for users that are unable to effectively use voice commands such as users with a voice disability, method 100 may include receiving the natural language input via alternative input modalities. For example, the natural language input may take the form of a text input that is received via a physical keyboard or controller input device. As another example, the natural language input may be a text input received via a virtual keyboard or another type of GUI element that the user may interact with via gestures detected by the camera device 18.} scaling the plurality of coordinates of the input gesture based on the one or more signals indicative of the scaling factor { [0059] As another example, the natural language input 62 may be a text input received via a virtual keyboard or another type of GUI element that the user may interact with via gestures detected by the camera device 18. In these scenarios, the natural language input 62 may be processed using the natural language processing module 42 executed by the deep neural network processor 38 and/or the processor 12. & [0061]}; and displaying a virtual representation of the input gesture based on the scaling of the coordinates of the input gesture. {figures 1, 6, 7 and 15 & {0055], [0061] and [0078-[0081] Note: where in steps (106, 108, 110, 112) a specified operation to modify a characteristic of the virtual object such as resizing gesture is received and the resized virtual object is displayed} Regarding claim 8 Petill et al is silent as to: a scaling factor based on an amount of pressure applied to a tip of a core body that extends from an opening formed in a case of the position indicator; receiving one or more signals indicative of a plurality of positions of a switch of the position indicator, wherein the plurality of positions includes an open position and a closed position; obtaining coordinates corresponding to an input gesture indicated by movement of the position indicator in the 3-dimensional space while the switch of the position indicator is depressed based on the one or more signals indicative of the plurality of spatial positions of the position indicator and the one or more signals indicative of the plurality of positions of the switch of the position indicator. 
With respect to claim 8 Rhodes et al discloses: a scaling factor based on an amount of pressure applied to a tip of a core body that extends from an opening formed in a case of the position indicator; { [0061] Specifically, the augmented reality composition system 110 tracks pressure, location, orientation, and movement of the writing device 106 as the user 112 moves the writing device 106 along the real-world surface 202.} receiving one or more signals indicative of a plurality of positions of a switch 302 {figure 3A} of the position indicator 106; { [0082] Similar to how the augmented reality composition system 110 detects pressure applied to the pressure sensor 302 of the writing device 106 to add digital marks, the augmented reality composition system 110 further detects pressure applied to the pressure sensor 301 of the writing device 106 to remove digital marks.} obtaining coordinates corresponding to an input gesture indicated by movement of the position indicator 106 in the 3-dimensional space { [0154] By using depth mapping or other three-dimensional modeling method, the augmented reality composition system 110 determines a three-dimensional coordinate location of the writing device 106 on the real-world surface.}, wherein the plurality of positions includes an open position and a closed position {figures 3A-3F} while the switch of the position indicator is in the closed position based on the one or more signals indicative of the plurality of spatial positions of the position indicator and the one or more signals indicative of the plurality of positions of the switch of the position indicator. { [0082] To illustrate, the augmented reality composition system 110 detects variations of pressure applied by the user 112 to depress the eraser 303 on the back end of the writing device 106 into the pressure sensor 301. } It would have been obvious to a person having ordinary skill in the art at the time the invention was effectively filed to provide the method of Petill et al with obtaining coordinates of the position indicator while the position indicator is moving on the surface of the physical object in the 3-dimensional space as taught by Rhodes et al. The rationale is as follow: one of ordinary skill in the art at the time the invention was effectively filed would have been motivated to provide a method with obtaining coordinates of the position indicator while the position indicator is moving on the surface of the physical object in the 3-dimensional space so as to identify the where the position indicator is moving for user interaction. As per claim 12 Petill et al discloses: The method of claim 8, further comprising: determining whether the switch of the position indicator is in the closed postion, based on the one or more signals indicative of the plurality of positions of the switch of the position indicator, wherein the obtaining of the coordinates of the input gesture is responsive to determining that the switch of the position indicator in the closed position { figures 6-10 & [0055] and [0070] Note: where the user moves virtual object (56A) to living room table (30E) based on the user gesture input}. 
As per claim 13 Petill et al discloses: The method of claim 12, further comprising: determining whether the switch of the position indicator is released, based on the one or more signals indicative of the plurality of positions of the switch of the position indicator, wherein the obtaining of the coordinates of the input gesture is ended responsive to determining that the switch of the position indicator is released. { figures 6-10 & [0055] and [0070] Note: where the user moves virtual object (56A) to living room table (30E) based on the user gesture input}. As per claim 14 Petill et al discloses: The method of claim 8 wherein: the position indicator includes a plurality of reference tags, and the one or more signals indicative of the plurality of spatial positions of the position indicator are indicative of a plurality of positions of the reference tags. {figures 6-10 & [0055]-[0077] Note: where the physical objects (30E), (30F), virtual objects (56A) and the position indicator (78) have tags } As per claim 15 Petill et al discloses: The method of claim 14 wherein: each of the reference tags includes a visually distinct pattern formed thereon {figures 6-10 & [0055]-[0077] Note: where the physical objects (30E), (30F) and virtual objects (56A) have different reference tags }, the one or more signals indicative of the plurality of spatial positions of the position indicator include image data corresponding to a plurality of images of the references tags. {figures 6-10 & [0055]-[0077] Note: where the position indicator (78) has a text message as a tag } As per claim 16 Petill et al discloses: The method of claim 14 wherein: each of the reference tags emits light, the one or more signals indicative of the plurality of spatial positions of the position indicator include image data corresponding to a plurality of images of the references tags. {figures 6-10 & [0055]-[0077] Note: where the tags emit light as they are displayed on HMD device (24) and include image data as they have text labels} As per claim 17 Petill et al discloses, insofar as the claims are definite and understood: A system comprising: one or more receivers 18 & 20 which, in operation, receive one or more signals indicative of a plurality of spatial positions of a position indicator in a 3-dimensional space, and one or more signals indicative of a surface of a physical object 30E in the 3-dimensional space {figures 6, 15, and 16 & [0077] As another example, the natural language input may be a text input received via a virtual keyboard or another type of GUI element that the user may interact with via gestures detected by the camera device 18. See also [0086]}; one or more processors 12 {figure 1} coupled to the one or more receivers 18 & 20 {figures 1 & 2}; and one or more memory 14 & 16 {figure 1} devices coupled to the one or more processors 12, the one or more memory 14 & 16 devices storing instructions that, when executed by the one or more processors 12, cause the system to: in a pattern that specifies a portion of the surface of the physical object 30E based on the one or more signals indicative of the plurality of spatial positions of the position indicator and the one or more signals indicative of the surface of the physical object 30E {figures 4 and 16 & [0047] and [0084] At 206, the method 200 may include recognizing the physical object based on a trained artificial intelligence machine learning model. 
In one example, the object detected at step 204 may be processed by the trained artificial intelligence learning model 40 to recognize the detected object based on different factors or characteristics of the object, such as, for example, geometric shape, size, color, surface texture, etc. Note: where in step 206 the physical object is recognized using object recognition in step (3) of figure 4}; determine whether the position indicator is on or over the portion of the surface of the physical object 30E specified by the pattern in which the position indicator is moved based on the plurality of coordinates of the position indicator obtained while the position indicator is moved on the surface of the physical object 30E and the one or more signals indicative of the plurality of spatial positions of the position indicator {figures 6 and 16 & [0086] At 210, the method 200 may include receiving a user input directed to the detected physical object in the physical environment, the user input including a user specified semantic tag. An example user input is illustrated in FIG. 6. In one example, the user input may be detected as being directed to a particular object based on a user indicated direction that may be determined based on a detected gaze direction and/or a detected gesture input. See also [0055]}; responsive to determining that the position indicator is on or over the portion of the surface of the physical object 30E, obtain coordinates of an input gesture based on the one or more signals indicative of the plurality of spatial positions of the position indicator; and store the coordinates of the input gesture {figure 16 & [0086] At 212, the method 200 may include associating the detected physical object with the user specified semantic tag in the database. Both user specified semantic tags and artificial intelligence generated semantic tags may be associated with a particular physical object and stored in the database. In this manner, the database may be continuously updated in real-time according to the steps of method 200. } . Regarding claim 17 Petill et al is silent as to: obtain a plurality of coordinates of the position indicator while the position indicator is moved on the surface of the physical object in the 3-dimensional space . . . wherein the portion of the surface of the physical object is smaller than the surface of the physical object; receiving a signal indicating that the portion of the surface of the physical object is an input surface in response a mechanical actuation of a switch on the position indicator; after the signal indicating that the portion of the surface of the physical object is the input surface. With respect to claim 17 Rhodes et al discloses: obtain a plurality of coordinates of the position indicator while the position indicator is moved on the surface of the physical object in the 3-dimensional space {[0154] By using depth mapping or other three-dimensional modeling method, the augmented reality composition system 110 determines a three-dimensional coordinate location of the writing device 106 on the real-world surface.} . . . 
wherein the portion of the surface of the physical object is smaller than the surface of the physical object; receiving a signal indicating that the portion of the surface of the physical object is an input surface in response a mechanical actuation of a switch on the position indicator; after the signal indicating that the portion of the surface of the physical object is the input surface is received {[0061] Specifically, the augmented reality composition system 110 tracks pressure, location, orientation, and movement of the writing device 106 as the user 112 moves the writing device 106 along the real-world surface 202. } It would have been obvious to a person having ordinary skill in the art at the time the invention was effectively filed to provide the system of Petill et al to obtain a plurality of coordinates of the position indicator while the position indicator is moved on the surface of the physical object in the 3-dimensional space as taught by Rhodes et al. The rationale is as follow: one of ordinary skill in the art at the time the invention was effectively filed would have been motivated to provide a system to obtain coordinates of the position indicator while the position indicator is moving on the surface of the physical object in the 3-dimensional space so as to identify the where the position indicator is moving for user interaction. As per claim 18 Petill et al discloses: The system of claim 17 wherein the one or more memory 14 & 16 devices store instructions that, when executed by the one or more processors 12, cause the system to display a virtual representation of the position indicator along with a virtual representation of the portion of the surface of the physical object 30E. {figure 8 & [0067] FIG. 8 illustrates another example disambiguation technique. Based on determining that the confidence value for the selection of the target virtual object 72 or target physical object 74 is lower than the threshold value, the HMD device 24 may be configured to present a query 78 to the user for a user confirmation 80 of the selection of the target virtual object 72 or the target physical object 74. In the illustrated example, the query 78 is presented as the text “Did you mean this table 17” in a virtual text box located near the selected target physical object 74 of the first table physical object 30E. } As per claim 19 Petill et al discloses: The system of claim 17 wherein the one or more memory 14 & 16 devices store instructions that, when executed by the one or more processors 12, cause the system to: obtain an indication of a scaling factor {figure 1, [0061] In one example, to identify the user specified operation 64, the natural language processing module 42 may be configured to identify a verb in the natural language input 62, and compare the identified verb to the list of operations for virtual object 68. For example, the verbs “move”, “transfer”, “put”, etc., may each be associated with a move operation. As another example, the verbs “open”, “show”, “start”, etc., may each be associated with the application start operation. As yet another example, the verbs “modify”, “resize”, “change”, etc., may each be associated with the modification operation. 
It should be appreciated that the specific operations and associated verbs/terms discussed above are merely exemplary, and that other types of operations may be performed by the virtual object handling module 70, and each operation may be associated with other verbs/terms identified by the natural language processing module 42. As a few other non-limiting examples, the operations 68 may further include a copy operation, a deletion operation, a resizing operation, a fitting operation, etc. }; and obtain coordinates of a scaled input gesture based on the scaling factor and the coordinates of the input gesture. {figures 6 and 16 & [0086]-[0087] Note: where in steps (210, 212) an input gestures is received and resizing is performed on the virtual object (68)} As per claim 20 Petill et al discloses: The system of claim 19 wherein the one or more memory 14 & 16 devices store instructions that, when executed by the one or more processors 12, cause system to display a virtual representation of the scaled input gesture. {figures 6 and 16 & [0086]-[0087] Note: where in steps (210, 212) an input gestures is received and resizing is performed on the virtual object (68)} Claims 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Petill et al (2021/0012113) in view of Rhodes et al (US 2019/0096129), as applied to claims 1-8 and 12-20 above, and in further view of Bharti et al (US 2019/0033780). Regarding claim 9 Petill et al is silent as to: displaying the scaling factor. With respect to claim 9 Bharti et al discloses: displaying the scaling factor. {figure 2 & [0026] According to embodiments, the feel and reaction of each individual holographic object 12 in response an applied force may be proportional (e.g., on a scale) to its real world counterpart, thereby providing a rich and immersive experience to the user 24. The applied force may be by scaled, for example based on a scaling factor 44 (e.g., 1/100, 1/10, etc.) to provide a scaled force, wherein the haptic effect of the holographic object 12 at the contact point 42 is adjusted to correspond to the scaled force. When created, each different holographic object 12 will be uniquely assigned a specific range of haptic effects corresponding to a specific range of applied forces, including a maximum and minimum force. In this way, just as in the real world, each different holographic object 12 will react in a distinct manner in response to a given applied force. As depicted in FIG. 2, the scaling factor 44 may be displayed adjacent the holographic object 12. } It would have been obvious to a person having ordinary skill in the art at the time the invention was effectively filed to provide the method of Petill et al with displaying the scaling factor as taught by Bharti et al. The rationale is as follow: one of ordinary skill in the art at the time the invention was effectively filed would have been motivated to provide a method with displaying the scaling factor so that the user would be able to see it so as to determine the amount of force to apply to the virtual object. See [0026] of Bharti et al. Regarding claim 10 Petill et al is silent as to: receiving a signal indicative of a pressure applied to a part of the position indicator, wherein the scaling factor is based on the signal indicative of the pressure applied to the part of the position indicator. 
With respect to claim 10 Bharti et al discloses: receiving a signal indicative of a pressure applied to a part of the position indicator, wherein the scaling factor is based on the signal indicative of the pressure applied to the part of the position indicator. { figure 2 & [0024]-[0026] Note: where the applied force to object (12) is proportional to the scaling factor} It would have been obvious to a person having ordinary skill in the art at the time the invention was effectively filed to provide the method of Petill et al with receiving a signal indicative of a pressure applied to a part of the position indicator, wherein the scaling factor is based on the signal indicative of the pressure applied to the part of the position indicator as taught by Bharti et al. The rationale is as follow: one of ordinary skill in the art at the time the invention was effectively filed would have been motivated to provide a method with receiving a signal indicative of a pressure applied to a part of the position indicator, wherein the scaling factor is based on the signal indicative of the pressure applied to the part of the position indicator in order to provide the user a proportional haptic feedback from the force applied to scaled object 13. See [0024]-[0026] of Bharti et al. Regarding claim 11 Petill et al is silent as to: receiving a signal indicative of an acceleration of the position indicator, wherein the scaling factor is based on the signal indicative of the acceleration of the position indicator. With respect to claim 11 Bharti et al discloses: receiving a signal indicative of an acceleration of the position indicator, wherein the scaling factor is based on the signal indicative of the acceleration of the position indicator. {figures 2 & [0024]-[0026] Note: where the applied force, which is mass times acceleration, to object 12 is proportional to the scaling factor.} It would have been obvious to a person having ordinary skill in the art at the time the invention was effectively filed to provide the method of Petill et al with receiving a signal indicative of an acceleration of the position indicator, wherein the scaling factor is based on the signal indicative of the acceleration of the position indicator as taught by Bharti et al. The rationale is as follow: one of ordinary skill in the art at the time the invention was effectively filed would have been motivated to provide a method with receiving a signal indicative of an acceleration of the position indicator, wherein the scaling factor is based on the signal indicative of the acceleration of the position indicator in order to provide the user a proportional haptic feedback from the force applied to scaled object 13. See [0024]-[0026] of Bharti et al. Response to Arguments Applicant's arguments filed November 19, 2025 have been fully considered but they are not persuasive. Applicant asserts in the third full paragraph the following: Notably, Rhodes fails to teach or suggest that a portion of the real-world surface 202 is specified using the writing device 106. Nothing has been found, or pointed to, in Rhodes which teaches or suggests that the writing device 106 is moved on the real-world surface 202 in a pattern that specifies a portion of the real-world surface 202. Contrary to applicant’s assertion, figures 2 and 3 of Rhodes et al clearly show “the writing device 106 is moved on the real-world surface 202 in a pattern that specifies a portion of the real-world surface 202”. 
In the ultimate paragraph on page 4 applicant asserts the following: The Office is understood to assert that the eraser 303 taught by Rhodes teaches the "switch" recited in claim 1. Applicant respectfully disagrees. Notably, Rhodes fails to teach or suggest "a switch". Paragraph [0082] of Rhodes teaches that the augmented reality composition system 110 detects variations of pressure applied by the user 112 to depress the eraser 303 on the back end of the writing device 106 into the pressure sensor 301. Notably, Rhodes fails to teach or suggest that the eraser 303 and/or the pressure sensor 301 is "a switch". This assertion is curious at best. A sensor is a switch. Additionally, paragraph [0082] or Rhodes et al, which applicant referenced, discloses Similar to how the augmented reality composition system 110 detects pressure applied to the pressure sensor 302 of the writing device 106 to add digital marks, the augmented reality composition system 110 further detects pressure applied to the pressure sensor 301 of the writing device 106 to remove digital marks. To illustrate, the augmented reality composition system 110 detects variations of pressure applied by the user 112 to depress the eraser 303 on the back end of the writing device 106 into the pressure sensor 301. In other words, because of switch 303 of Rhodes et al, there a shifting or changing, which is the definition of switch, based on the amount and the variation of pressure applied to switch 303. Applicant continues on page 6 in the fourth full paragraph with the following: Notably, Rhodes fails to teach or suggest "scaling" and/or "a scaling factor". Paragraph [0061] of Rhodes teaches that the augmented reality composition system 110 tracks pressure, location, orientation, and movement of the writing device 106 as the user 112 moves the writing device 106 along the real-world surface 202. Nothing has been found, or pointed to, in Rhodes which teaches or suggests that the pressure mentioned in paragraph [0061] corresponds to "a scaling factor". Contrary to applicant’s assertion, Rhodes et al discloses that which is claimed: “a scaling factor based on an amount of pressure applied to a tip of a core body that extends from an opening formed in a case of the position indicator” Rhodes et al discloses the claimed limitation in [0061] “Specifically, the augmented reality composition system 110 tracks pressure, location, orientation, and movement of the writing device 106 as the user 112 moves the writing device 106 along the real-world surface 202.“ Therefore, contrary to applicant’s assertions the applied prior art discloses the claimed invention. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID D DAVIS whose telephone number is (571)272-7572. The examiner can normally be reached Monday - Friday, 8 a.m. - 4 p.m.. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ke Xiao can be reached on 571-272-7776. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. 
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DAVID D DAVIS/Primary Examiner, Art Unit 2627 ddd

Prosecution Timeline

Dec 29, 2022: Application Filed
Jun 24, 2023: Non-Final Rejection — §103
Sep 27, 2023: Response Filed
Oct 02, 2023: Final Rejection — §103
Dec 05, 2023: Response after Non-Final Action
Dec 13, 2023: Response after Non-Final Action
Dec 13, 2023: Applicant Interview (Telephonic)
Jan 03, 2024: Request for Continued Examination
Jan 13, 2024: Response after Non-Final Action
Apr 18, 2024: Non-Final Rejection — §103
Jul 30, 2024: Response Filed
Nov 01, 2024: Final Rejection — §103
Jan 06, 2025: Request for Continued Examination
Jan 13, 2025: Response after Non-Final Action
Mar 18, 2025: Non-Final Rejection — §103
Jun 20, 2025: Response Filed
Sep 17, 2025: Final Rejection — §103
Nov 19, 2025: Response after Non-Final Action
Dec 19, 2025: Request for Continued Examination
Jan 16, 2026: Response after Non-Final Action
Jan 23, 2026: Non-Final Rejection — §103 (current)
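
The round counts and pendency figures elsewhere on this page follow directly from the timeline above. A minimal sketch, listing only the filing, rejection, and RCE milestones (responses omitted for brevity):

```python
from datetime import date

# Key milestones from the prosecution timeline above (responses omitted).
events = [
    (date(2022, 12, 29), "Application Filed"),
    (date(2023, 6, 24),  "Non-Final Rejection"),
    (date(2023, 10, 2),  "Final Rejection"),
    (date(2024, 1, 3),   "Request for Continued Examination"),
    (date(2024, 4, 18),  "Non-Final Rejection"),
    (date(2024, 11, 1),  "Final Rejection"),
    (date(2025, 1, 6),   "Request for Continued Examination"),
    (date(2025, 3, 18),  "Non-Final Rejection"),
    (date(2025, 9, 17),  "Final Rejection"),
    (date(2025, 12, 19), "Request for Continued Examination"),
    (date(2026, 1, 23),  "Non-Final Rejection"),
]

oa_rounds = sum("Rejection" in label for _, label in events)
rce_count = sum("Continued Examination" in label for _, label in events)
pendency_days = (events[-1][0] - events[0][0]).days

print(f"Office action rounds to date: {oa_rounds}")    # 7, matching "OA Round 7"
print(f"RCEs filed: {rce_count}")                      # 3
print(f"Pendency to current OA: {pendency_days} days "
      f"(~{pendency_days / 365.25:.1f} years)")
```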

Precedent Cases

Applications granted by the same examiner with similar technology

Patent 12602106: Ambience-Driven User Experience (2y 5m to grant; granted Apr 14, 2026)
Patent 12602128: DISPLAY DEVICE HAVING PIXEL DRIVE CIRCUITS AND SENSOR DRIVE CIRCUITS (2y 5m to grant; granted Apr 14, 2026)
Patent 12602121: TOUCH DEVICE FOR PASSIVE RESONANT STYLUS, DRIVING METHOD FOR THE SAME AND TOUCH SYSTEM (2y 5m to grant; granted Apr 14, 2026)
Patent 12596265: Aiming Device with a Diffractive Optical Element and Reflective Image Combiner (2y 5m to grant; granted Apr 07, 2026)
Patent 12592178: Display Device Including an Electrostatic Discharge Circuit for Discharging Static Electricity (2y 5m to grant; granted Mar 31, 2026)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 70%
With Interview: 79% (+9.1%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 900 resolved cases by this examiner. Grant probability is derived from the career allow rate.
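
A worked version of the projection card above. Treating the career allow rate as the baseline grant probability and adding the interview lift is an assumption about how the tool combines the two figures, but it reproduces the numbers shown:

```python
# Inputs taken from the cards above; the additive combination is assumed.
base_grant_probability = 0.70   # career allow rate used as the baseline
interview_lift = 0.091          # observed lift among interviewed cases

with_interview = base_grant_probability + interview_lift

print(f"Grant probability:                {base_grant_probability:.0%}")  # 70%
print(f"Grant probability with interview: {with_interview:.0%}")          # 79%
```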
