Prosecution Insights
Last updated: April 19, 2026
Application No. 17/933,020

DEVICES, METHODS, AND GRAPHICAL USER INTERFACES FOR CONTENT APPLICATIONS

Status: Final Rejection under §103
Filed: Sep 16, 2022
Examiner: KEATON, SHERROD L
Art Unit: 2148
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Apple Inc.
OA Round: 4 (Final)

Predictions:
Grant Probability: 52% (Moderate)
Expected OA Rounds: 5-6
Estimated Time to Grant: 4y 6m
Grant Probability with Interview: 88%

Examiner Intelligence

Career Allowance Rate: 52% (295 granted / 563 resolved cases), -2.6% vs. Tech Center average
Interview Lift: +36.1% higher allowance rate in resolved cases with an interview
Typical Timeline: 4y 6m average prosecution; 32 applications currently pending
Career History: 595 total applications across all art units

Statute-Specific Performance

§101: 14.9% allowance (-25.1% vs. TC avg)
§103: 62.0% allowance (+22.0% vs. TC avg)
§102: 11.1% allowance (-28.9% vs. TC avg)
§112: 8.0% allowance (-32.0% vs. TC avg)
Based on career data from 563 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

This action is in response to the filing of 2-10-2026. Claims 1, 3-23 and 93 are pending and have been considered below.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

"A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1, 3-6, 10, 12-14, 17, 19, 21-23 and 93 are rejected under 35 U.S.C. 103 as being unpatentable over Mak (US 20190371279 A1) in view of Rogers et al. ("Rogers," US 20140114845 A1), WO 2022055821 A1 ("Sterling"), and Wee et al. ("Wee"), "FOCUS: A Usable and Effective Approach to OLED Display Power Management," pages 573-581, September 2013.
Claim 1: Mak discloses a method comprising: at an electronic device in communication with a display generation component and one or more input devices (Paragraphs 72-73; "content that is generated by a client (e.g., user) on any device the client owns/uses (e.g., client devices such as mobile device, tablet, camera, head-mounted display device"): while presenting a content item in a three-dimensional environment, displaying, via the display generation component (Paragraphs 7 and 139; 3D environment). Mak discloses the capability to play/pause content (Paragraph 276; user provided capability to play/pause content) but may not explicitly disclose a user interface associated with the content item, wherein the user interface includes one or more user interface elements for modifying playback of the content item.

Rogers is provided because it discloses a virtual reality environment in a display (Paragraph 46); the system also allows for the adjustment of presentation within a 3D environment (Paragraph 122). Within this environment, content is presented with interface elements (buttons) that control playback (Figure 8, Paragraphs 107 and 123). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a known technique to improve a similar device in the same way and provide playback controls with the content presentation of Mak. One would have been motivated to provide the playback elements as an explicit interaction method, eliminating possible erroneous inputs.
Mak also may not explicitly disclose a respective user interface element for modifying a virtual lighting effect that affects an appearance of a region of the three-dimensional environment other than the content item; while displaying the user interface associated with the content item, receiving, via the one or more input devices, a first set of one or more inputs including a first user input directed to the respective user interface element, the first set of one or more inputs corresponding to a request to modify the virtual lighting effect; and in response to receiving the first set of one or more inputs: continuing to present the content item in the three-dimensional environment; and applying the virtual lighting effect to the region of the three-dimensional environment other than the content item, without applying the virtual lighting effect to the content item, while presenting the content item in the three-dimensional environment and while applying the virtual lighting effect to the region of the three-dimensional environment other than the content item.

Sterling is provided because it discloses an environment that provides a lighting effect around a content area; the capability for implementing the effect can be triggered through different inputs (Paragraph 58), and the effect can additionally be activated through graphical elements within the environment (Paragraph 61; button/pop-up/slider). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a known technique to improve a similar device in the same way and provide presentation modification from inputs within Mak. One would have been motivated to provide the presentation modification from inputs in order to explicitly control adjustments for enhanced user interface interactions.
Mak also may not explicitly disclose receiving, via the one or more input devices, a second set of one or more inputs including a second input directed to the user interface that includes the one or more user interface elements for modifying playback of the content item; and in response to receiving the second set of one or more inputs: continuing to present the content item in the three-dimensional environment; and ceasing to apply the virtual lighting effect to the region of the three-dimensional environment other than the content item.

Wee is provided because it discloses an environment that provides a lighting effect around a content area (Figure 2) and further provides the capability to remove the lighting effect once an input is provided to a content playback functionality (Page 576, Default Profile, Paragraph 2; remove focus during scroll-bar input). The capability of providing and removing the lighting effect can be utilized in the modified Mak. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a known technique to improve a similar device in the same way and provide presentation modification within Mak. One would have been motivated to provide the presentation for focus control of the content.
Claim 3: Mak, Rogers, Sterling and Wee disclose the method of claim 1, wherein: before receiving the first set of one or more inputs directed to the respective user interface element, the electronic device displays, via the display generation component, the region of the three-dimensional environment other than the content item with a first amount of the virtual lighting effect, and in response to receiving the first set of one or more inputs, the electronic device displays, via the display generation component, the region of the three-dimensional environment other than the content item with a second amount of the virtual lighting effect, the second amount different from the first amount (Mak: Figures 3a-b, Paragraph 130; first presentation which can be adjusted to varying schemes; and Sterling: Figures 3-5 and Paragraphs 40 and 61; modified area other than content can adjust immersion area).

Claim 4: Mak, Rogers, Sterling and Wee disclose the method of claim 1, wherein: before receiving the first set of one or more inputs, the region of the three-dimensional environment that does not include the content item is displayed with a first level of brightness, and displaying the three-dimensional environment with the virtual lighting effect in response to the first set of one or more inputs includes displaying the region of the three-dimensional environment that does not include the content item with a second level of brightness, different from the first level of brightness (Mak: Figures 3a-b, Paragraph 130; presentation (darkness/levels of brightness) adjusted based on input; and Sterling: Figures 3-5 and Paragraph 61; modified area levels can be changed with a knob or slider).
Claim 5: Mak, Rogers, Sterling and Wee disclose the method of claim 1, wherein displaying the region of the three-dimensional environment other than the content item with the virtual lighting effect in response to the first set of one or more inputs includes displaying a respective virtual lighting effect emanating from the content item on one or more objects in the three-dimensional environment (Mak: Figures 3a-b, Paragraph 130; by allowing presentation adjustment to the area and content (darkness/levels of brightness), a brighter color could emanate from the content; and Sterling: Figures 3-5 and Paragraphs 37, 40 and 61; modified area levels of intensity and type of immersion can be adjusted).

Claim 6: Mak, Rogers, Sterling and Wee disclose the method of claim 1, wherein: before receiving the first set of one or more inputs directed to the respective user interface element, the electronic device displays, via the display generation component, the three-dimensional environment with a first amount of the virtual lighting effect, including displaying the region of the three-dimensional environment that does not include the content item with a first level of brightness and displaying a first amount of a respective virtual lighting effect emanating from the content item on one or more objects in the three-dimensional environment (Mak: Figures 3a-b, Paragraph 130; by allowing presentation adjustment to the area and content (darkness/levels of brightness), a brighter color could emanate from the content).
and in response to receiving the first set of one or more inputs, the electronic device displays, via the display generation component, the three-dimensional environment with a second amount of the virtual lighting effect, including displaying the region of the three-dimensional environment that does not include the content item with a second level of brightness and displaying a second amount of the respective virtual lighting effect emanating from the content item on the one or more objects in the three-dimensional environment (Mak: Figures 3a-b, Paragraph 127 (contrast adjusted depending on the background, which may include different levels of brightness) and Paragraph 130; by allowing presentation adjustment to the area around the content (darkness/levels of brightness), a brighter color could emanate or be reduced; and Sterling: Figures 3-5 and Paragraphs 37, 40 and 61; modified area levels of intensity and type can be adjusted).

Claim 10: Mak, Rogers, Sterling and Wee disclose the method of claim 1, further comprising: while the content item is playing: displaying, via the display generation component, the region of the three-dimensional environment other than the content item with the virtual lighting effect; and receiving, via the one or more input devices, a user input corresponding to a request to pause the content item; and in response to receiving the user input corresponding to the request to pause the content item: pausing the content item; and displaying, via the display generation component, the region of the three-dimensional environment other than the content item without the virtual lighting effect (Mak: Paragraph 276; play/pause capability, and Paragraph 130; once web content is paused the user can adjust the lighting effect; Rogers: Paragraph 104; playback elements; and Sterling: Figures 3-5 and Paragraphs 37, 40 and 61; modified area levels of intensity and type can be adjusted).
Claim 12: Mak, Rogers, Sterling and Wee disclose the method of claim 1, further comprising: receiving, via the one or more input devices, a respective user input directed to a second respective user interface element of the one or more user interface elements for modifying playback of the content item; and in response to receiving the respective user input: in accordance with a determination that the second respective user interface element is a user interface element that, when selected, causes the electronic device to modify a volume of audio content of the content item, modifying the volume of the audio content in accordance with the respective user input (Rogers: Figure 8 and Paragraph 123; the selection of elements within the environment provides additional secondary elements for controlling volume, which would include content playback; Paragraph 107; videos for playback).

Claim 13: Mak, Rogers, Sterling and Wee disclose the method of claim 1, wherein the user interface associated with the content item is a separate user interface from the content item and is displayed, via the display generation component, between the content item and a viewpoint of a user of the electronic device in the three-dimensional environment (Mak: Figure 20o and Paragraphs 294-295; the content item is an interface and the extracted item for interaction represents an interface; and Rogers: Figures 8 (audio interface) and 11 (content interface) and Paragraph 123; provides a separate volume control interface).
Claim 14: Mak, Rogers, Sterling and Wee disclose the method of claim 13, wherein: the content item is displayed, via the display generation component, at a first angle relative to a viewpoint of a user in the three-dimensional environment, and the user interface associated with the content item is displayed, via the display generation component, at a second angle, different from the first angle, relative to the viewpoint of the user in the three-dimensional environment (Mak: Figure 20o; provides different content elements at different angles; Paragraph 117; content elements are presented at different angles; Paragraph 254; web controls are presented with the page).

Claim 17: Mak, Rogers, Sterling and Wee disclose the method of claim 1, further comprising: while displaying, via the display generation component, the content item at a first size and the user interface associated with the content item at a second size, receiving, via the one or more input devices, an input corresponding to a request to resize the content item; and in response to receiving the input corresponding to the request to resize the content item: displaying, via the display generation component, the content item at a third size, different from the first size, in accordance with the input corresponding to the request to resize the content item, and displaying, via the display generation component, the user interface associated with the content item at the second size (Rogers: Paragraphs 103 and 107; provides interface elements and further provides the user with options to resize a page from its original presentation (content item) in the environment; and Sterling: Figures 3b-3 and Paragraph 47).
Claim 19: Mak, Rogers, Sterling and Wee disclose the method of claim 1, wherein the content item is separate from the user interface associated with the content item in the three-dimensional environment, and the method further comprises: displaying, via the display generation component, one or more second user interface elements for modifying playback of the content item, the one or more second user interface elements displayed overlaid on the content item in the three-dimensional environment (Mak: Figure 20o, element 254 (object for interaction) and Figure 21, element 2122 (audio icons); provides interface/graphical presentation associated with content overlaid; Paragraph 254; web control elements (different types of buttons/handles) can be displayed on the page; Rogers: Figure 8 and Paragraph 123; provides an interface for playback control overlaid on content; and Sterling: Paragraph 55).

Claim 21: Mak, Rogers, Sterling and Wee disclose the method of claim 1, further comprising: displaying, via the display generation component, a respective user interface element displayed separately from the content item and the user interface associated with the content item; while displaying the respective user interface element, receiving, via the one or more input devices, an input directed to the respective user interface element; and in response to detecting the input directed to the respective user interface element, initiating a process to resize the content item in the three-dimensional environment in accordance with the input directed to the respective user interface element (Rogers: Paragraphs 103 and 107; provides interface elements and further provides the user with options to resize a page from its original presentation (content item) in the environment; and Sterling: Figure 4b and Paragraph 51; size of element changed).

Claims 22 and 23 are similar in scope to claim 1 and are therefore rejected under the same rationale (Paragraph 300; processor, memory and CRM).
Claim 93: Mak, Rogers, Sterling and Wee disclose the method of claim 1, wherein the region of the three-dimensional environment other than the content item includes a representation of a physical environment of the electronic device, and applying the virtual lighting effect to the region of the three-dimensional environment other than the content item includes applying the virtual lighting effect to the representation of the physical environment of the electronic device (Sterling: Figures 3-5).

Claims 7 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mak (US 20190371279 A1), Rogers et al. ("Rogers," US 20140114845 A1), WO 2022055821 A1 ("Sterling") and Wee et al. ("Wee"), "FOCUS: A Usable and Effective Approach to OLED Display Power Management," pages 573-581, September 2013, in further view of Herz et al. ("Herz," US 11343420 B1).

Claim 7: Mak, Rogers, Sterling and Wee disclose the method of claim 1, but may not explicitly disclose wherein displaying the region of the three-dimensional environment other than the content item with the virtual lighting effect in response to the first set of one or more inputs includes: in accordance with detecting, via the one or more input devices, that an attention of a user of the electronic device is directed to a first respective region of the three-dimensional environment, displaying, via the display generation component, the three-dimensional environment with a first amount of the virtual lighting effect; and in accordance with detecting, via the one or more input devices, that the attention of the user is directed to a second respective region, different from the first respective region, of the three-dimensional environment, displaying, via the display generation component, the region of the three-dimensional environment other than the content item with a second amount of the virtual lighting effect different from the first amount (Sterling: Figures 3-5).
Herz is provided because it discloses an augmented/virtual reality environment (Column 5, Lines 47-57); the system also allows for the adjustment of lighting effects by providing highlighting around an object within the environment based on an attention/focus determination, and additionally, the effect is removed once focus stops (Column 15, Lines 55-62). The focus determination could be incorporated with the functionality of Mak in order to add to the current methods that trigger lighting effects (Paragraph 130). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a known technique to improve a similar device in the same way and utilize attention determination as an input option for the lighting effect in Mak. One would have been motivated to provide the input option because it expands the capabilities currently offered for a more comprehensive system.

Claim 20: Mak, Rogers, Sterling and Wee disclose the method of claim 1, but may not explicitly disclose further comprising: detecting, via the one or more input devices, that an attention of a user directed to a respective user interface element of the one or more user interface elements satisfies one or more first criteria; and in response to detecting that the attention of the user directed to the respective user interface element satisfies the one or more first criteria: in accordance with a determination that the respective user interface element satisfies one or more second criteria, displaying, via the display generation component, a visual indication identifying a functionality of the respective user interface element; and in accordance with a determination that the respective user interface element does not satisfy the one or more second criteria, forgoing display of the visual indication identifying the functionality of the respective user interface element.
Herz is provided because it discloses an augmented/virtual reality environment (Column 5, Lines 47-57); the system also allows for the adjustment of lighting effects by providing highlighting around an object within the environment based on an attention/focus determination, if a certain time threshold is also met (Column 15, Lines 55-62). The focus determination could be incorporated with the functionality of Mak in order to add to the current methods that trigger lighting effects (Paragraph 130). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a known technique to improve a similar device in the same way and utilize attention determination as an input option for the lighting effect in Mak. One would have been motivated to provide the input option because it expands the capabilities currently offered for a more comprehensive system.

Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Mak (US 20190371279 A1), Rogers et al. ("Rogers," US 20140114845 A1), WO 2022055821 A1 ("Sterling") and Wee et al. ("Wee"), "FOCUS: A Usable and Effective Approach to OLED Display Power Management," pages 573-581, September 2013, in further view of LeGendre et al. ("LeGendre," US 20210166437 A1).
Claim 8: Mak, Rogers, Sterling and Wee disclose the method of claim 1, wherein the first set of one or more inputs includes detecting, via the one or more input devices, a predefined portion of a user of the electronic device in a predefined pose for less than a predetermined time threshold; and the second set of inputs includes detecting, via the one or more input devices, the predefined portion of the user of the electronic device in the predefined pose for less than the predetermined time threshold (Mak: Paragraph 14; provides a pose-determined threshold which causes a 3D presentation reaction; Paragraph 130; provides lighting effect adjustment based on a determined input; and Sterling: Paragraphs 24, 56 and 59-60; multiple inputs consist of multiple gestures (poses) with time thresholds). However, Mak may not explicitly disclose the pose position determining the lighting effect.

LeGendre is provided because it discloses an augmented/virtual reality environment (Paragraph 10); the system also allows for the adjustment of a lighting effect within the environment based on pose information (Paragraphs 124-125) while utilizing a time threshold to aid in the determination (Paragraph 48). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a known technique to improve a similar device in the same way and provide pose capture as an input option for the lighting effect in Mak. One would have been motivated to provide the input option because it expands the pose input capabilities currently offered for a more comprehensive system.
Claim 9: Mak, Rogers, Sterling and Wee disclose the method of claim 1, further comprising: while displaying the region of the three-dimensional environment other than the content item with a first amount of the virtual lighting effect, receiving, via the one or more input devices, an input directed to the respective user interface element that includes detecting, via the one or more input devices, movement of a predefined portion of a user of the electronic device while the predefined portion of the user is in a predefined pose; and in response to the input directed to the respective user interface element, displaying, via the display generation component, the region of the three-dimensional environment other than the content item with a second amount of the virtual lighting effect, wherein the second amount is based on the movement of the predefined portion of the user while the predefined portion of the user is in the predefined pose (Mak: Paragraph 14; provides a pose-determined threshold which causes a 3D presentation reaction; Paragraph 130; provides lighting effect adjustment based on a determined input; and Sterling: Paragraphs 24, 56 and 59-60; multiple inputs consist of multiple gestures (poses) with time thresholds). However, Mak may not explicitly disclose the pose position determining the lighting effect.

LeGendre is provided because it discloses an augmented/virtual reality environment (Paragraph 10); the system also allows for the adjustment of a lighting effect within the environment based on pose information (Paragraphs 124-125). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a known technique to improve a similar device in the same way and provide pose capture as an input option for lighting effects in Mak. One would have been motivated to provide the input option because it expands the pose input capabilities currently offered for a more comprehensive system.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Mak (US 20190371279 A1), Rogers et al. ("Rogers," US 20140114845 A1), WO 2022055821 A1 ("Sterling") and Wee et al. ("Wee"), "FOCUS: A Usable and Effective Approach to OLED Display Power Management," pages 573-581, September 2013, in further view of Caron et al. ("Caron," US 10852814 B1).

Claim 11: Mak, Rogers, Sterling and Wee disclose the method of claim 1, further comprising: receiving, via the one or more input devices, a respective user input directed to a second respective user interface element of the one or more user interface elements for modifying playback of the content item; and in response to receiving the respective user input: in accordance with a determination that the second respective user interface element is a user interface element that, when selected, causes the electronic device to toggle between playing and pausing the content item, toggling a play or pause state of the content item (Mak: Paragraphs 28 and 276; play/pause capability to toggle between functions). However, the combination may not explicitly disclose: in accordance with a determination that the second respective user interface element is a user interface element that, when selected, causes the electronic device to update a playback position of the content item, updating the playback position of the content item in accordance with the respective user input.

Caron is provided because it discloses an augmented/virtual reality environment (Column 3, Lines 20-37); the system also provides playback controls for content presented in the environment, and those playback controls additionally provide options to update the content through fast forward and rewind (Figures 7a-b and Column 8, Line 55 - Column 9, Line 19).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a known technique to improve a similar device in the same way and provide additional playback controls with the current controls found in Mak. One would have been motivated to provide the controls because it expands the interaction capabilities, offering a more comprehensive system.

Claims 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Mak (US 20190371279 A1), Rogers et al. ("Rogers," US 20140114845 A1), WO 2022055821 A1 ("Sterling") and Wee et al. ("Wee"), "FOCUS: A Usable and Effective Approach to OLED Display Power Management," pages 573-581, September 2013, in further view of Lacey et al. ("Lacey," US 20190362557 A1).

Claim 15: Mak, Rogers, Sterling and Wee disclose the method of claim 1, but may not explicitly disclose all features wherein the electronic device displays the one or more user interface elements for modifying playback of the content item in response to detecting, via the one or more input devices, a predefined portion of a user of the electronic device in a pose that satisfies one or more criteria (Mak: Paragraphs 13-14; content display based on pose, meeting a criterion (i.e., changing location); Paragraph 114). Mak's movement of windows can modify playback. Further, Lacey is provided because it discloses an augmented/virtual reality environment (Paragraph 93); the system also allows for the presentation of a menu pertaining to an application (Paragraph 386). This application could reasonably provide menu options for a video application (as found in Paragraph 175), and these menu options could be incorporated with the playback controls found in Mak.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a known technique to improve a similar device in the same way and provide pose capture as an input option for presenting settings in Mak. One would have been motivated to provide the input option because it expands the pose input capabilities currently offered for a more comprehensive system.

Claim 16: Mak, Rogers, Sterling, Wee and Lacey disclose the method of claim 15, further comprising: while displaying the one or more user interface elements for modifying playback of the content item, detecting, via the one or more input devices, the predefined portion of the user in a pose that does not satisfy the one or more criteria; and in response to detecting the predefined portion of the user in the pose that does not satisfy the one or more criteria, reducing a visual prominence with which the electronic device displays, via the display generation component, the one or more user interface elements for modifying playback of the content item (Mak: Paragraph 14; pose changes visual prominence by moving it within the field of view, and Paragraph 280; head pose changes beyond a threshold (criteria) pause playback, which affects visual presentation; and Lacey: Paragraph 386; requires a certain pose and actions/criteria to provide a menu (providing a form of visual prominence)).

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Mak (US 20190371279 A1), Rogers et al. ("Rogers," US 20140114845 A1), WO 2022055821 A1 ("Sterling") and Wee et al. ("Wee"), "FOCUS: A Usable and Effective Approach to OLED Display Power Management," pages 573-581, September 2013, in further view of Foxlin (US 20020024675 A1).
Claim 18: Mak, Rogers, Sterling and Wee disclose the method of claim 1, further comprising: while displaying, via the display generation component, the content item at a first size and first distance from a viewpoint of a user in the three-dimensional environment and the user interface associated with the content item at a second size and second distance from the viewpoint of the user in the three-dimensional environment, receiving, via the one or more input devices, an input corresponding to a request to reposition the content item in the three-dimensional environment; and in response to receiving the input corresponding to the request to reposition the content item: displaying, via the display generation component, the content item at a third size at a third distance, different from the first distance, from the viewpoint of the user in the three-dimensional environment in accordance with the input corresponding to the request to reposition the content item; and displaying, via the display generation component, the user interface associated with the content item at the second size at a fourth distance from the viewpoint of the user in accordance with the input corresponding to the request to reposition the content item (Mak: Paragraph 172; provides content that is moved and resized to fit a surface; also Figure 20o and Paragraphs 295-296; provides first and secondary linked windows (supplemental data/controls)). Further, Foxlin is provided because it discloses an augmented/virtual reality environment (Paragraph 79); the system also allows a user to initiate movement of a window and change its size (Paragraph 105). This functionality can be applied to the linked windows of Mak Figure 20o to enhance manipulation within the environment.
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a known technique to improve a similar device in the same way and provide size and distance manipulation within the environment of Mak. One would have been motivated to provide the input option because it expands the interaction capabilities currently offered, yielding a more comprehensive system.

Response to Arguments

Applicant's arguments have been fully considered; the newly incorporated Sterling and Wee references provide the amended claim limitations.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: 20160379418 A1, Figure 7.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHERROD L KEATON, whose telephone number is (571) 270-1697. The examiner can normally be reached Monday-Friday from 9:30 am to 5 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, MICHELLE BECHTOLD, can be reached at 571-272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/SHERROD L KEATON/
Primary Examiner, Art Unit 2148
2-19-2026

Prosecution Timeline

Sep 16, 2022
Application Filed
May 24, 2023
Response after Non-Final Action
Jan 25, 2025
Non-Final Rejection — §103
Apr 24, 2025
Applicant Interview (Telephonic)
Apr 30, 2025
Response Filed
May 03, 2025
Examiner Interview Summary
May 16, 2025
Final Rejection — §103
Aug 19, 2025
Applicant Interview (Telephonic)
Aug 21, 2025
Examiner Interview Summary
Sep 04, 2025
Request for Continued Examination
Sep 10, 2025
Response after Non-Final Action
Sep 27, 2025
Non-Final Rejection — §103
Nov 25, 2025
Applicant Interview (Telephonic)
Nov 29, 2025
Examiner Interview Summary
Jan 27, 2026
Response Filed
Feb 20, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566823
SYSTEMS AND METHODS FOR INTERPOLATIVE CENTROID CONTRASTIVE LEARNING
2y 5m to grant Granted Mar 03, 2026
Patent 12547820
Automated Generation Of Commentator-Specific Scripts
2y 5m to grant Granted Feb 10, 2026
Patent 12530587
SYSTEMS AND METHODS FOR CONTRASTIVE LEARNING WITH SELF-LABELING REFINEMENT
2y 5m to grant Granted Jan 20, 2026
Patent 12524147
Modality Learning on Mobile Devices
2y 5m to grant Granted Jan 13, 2026
Patent 12524603
METHODS FOR RECOGNIZING AND INTERPRETING GRAPHIC ELEMENTS
2y 5m to grant Granted Jan 13, 2026
Based on the 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
52%
Grant Probability
88%
With Interview (+36.1%)
4y 6m
Median Time to Grant
High
PTA Risk
Based on 563 resolved cases by this examiner. Grant probability derived from career allow rate.
