Prosecution Insights
Last updated: April 19, 2026
Application No. 18/587,657

METHOD OF CUSTOMIZING AND DEMONSTRATING PRODUCTS IN A VIRTUAL ENVIRONMENT

Final Rejection §103

Filed: Feb 26, 2024
Examiner: POND, ROBERT M
Art Unit: 3688
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Apple Inc.
OA Round: 2 (Final)

Predictions
Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 71% (above average; 495 granted / 695 resolved; +19.2% vs TC avg)
Interview Lift: +42.4% (strong), comparing resolved cases with an interview vs. without
Typical Timeline: 3y 4m average prosecution; 20 applications currently pending
Career History: 715 total applications across all art units

Statute-Specific Performance

§101: 22.6% (-17.4% vs TC avg)
§103: 38.9% (-1.1% vs TC avg)
§102: 13.4% (-26.6% vs TC avg)
§112: 10.1% (-29.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 695 resolved cases.
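The headline examiner statistics in this report can be reproduced from the raw counts with simple arithmetic. A minimal sketch (Python; the variable names are illustrative, and the Tech Center average is back-derived from the stated delta rather than taken from USPTO data):

```python
# Derive the dashboard's headline figures from the raw counts shown above.
granted = 495
resolved = 695

allow_rate = granted / resolved                  # career allow rate
print(f"Career allow rate: {allow_rate:.1%}")    # → 71.2%, shown rounded as 71%

# The "+19.2% vs TC avg" delta implies a Tech Center 3600 average of:
tc_avg = allow_rate - 0.192
print(f"Implied TC average: {tc_avg:.1%}")       # → 52.0%
```

The same subtraction recovers the per-statute Tech Center baselines from the deltas in the table above (e.g., §103: 38.9% + 1.1% = 40.0%).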

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

All pending claims 1-25, filed November 25, 2025, are examined in this final office action necessitated by amendment.

Response to Arguments

Applicant's arguments (see remarks filed November 25, 2025) with respect to the rejections of claims under 35 USC 102/103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made, necessitated by amendment. Baughman was withdrawn in favor of Keeler (Niemeyer). All arguments hinge on Baughman and are therefore rendered moot.

35 USC § 101 - Subject Matter Eligibility

All independent claims recite a computer-generated environment. The instant specification describes an extended reality/three-dimensional environment (i.e., augmented reality) with sufficient depth and application to overcome a rejection under Step 2B as adding significantly more to the judicial exception. For example, the electronic device can generate a virtual table and display the virtual table as table 304 in three-dimensional environment 300 to appear as if table 304 is physically in the room with the user.

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 6, 9, 14, 17 and 22 are rejected under 35 USC 103 as being unpatentable over Keeler et al., US 2019/0266663 ("Keeler"), incorporating by reference in its entirety Niemeyer et al., US 2013/0297281 ("Keeler (Niemeyer)"). In Keeler see at least (underlined text is for emphasis):

Regarding claim 1:

A method, comprising: at an electronic device in communication with a display and one or more input devices: presenting, via the display, in a three-dimensional environment, a representation of an object, wherein one or more aspects of the object are configurable in response to receiving, via the one or more input devices, one or more user inputs; while presenting the representation of the object, receiving, via the one or more input devices, a user input corresponding to a request to present a demonstration of the object and a selection input directed to virtual content; and

[Keeler: 0034] In one or more embodiments, methods and/or systems described herein can be utilized to create and/or implement a virtual or augmented reality environment for a three-dimensional store (e.g., an establishment that offers goods and/or services for sale and/or for rent). For example, a user (e.g., a customer) can utilize a head-mounted display with a three-dimensional viewing capability to view and/or interact with the virtual three-dimensional store. For instance, the head-mounted display can be coupled to a network (e.g., an Internet) and can access a computer system that implements the three-dimensional store via the network. In one or more embodiments, a personal computing device such as a tablet computer, a mobile smart phone, or a smart watch can serve as a surrogate for a head-mounted display.

[Keeler: 0035] In one or more embodiments, a three-dimensional simulation can be based on a store layout. In one example, one or more CAD (computer aided design) files can store a brick-and-mortar store layout (e.g., a physical store layout). In a second example, one or more files can store a store layout that may not exist in a physical reality. In another example, one or more files can store one or more portions of a brick-and-mortar store layout and one or more portions of a store layout that may not exist in a physical reality.

[Keeler: 0038] In one or more embodiments, a system can be configured to allow for virtual device interaction where a customer can interact with an actual operating system (e.g., a wireless telephone operating system, a tablet operating system, a music player operating system, a personal digital assistant operating system, etc.) in a manner as to obtain a "hands-on" experience of how a device will function prior to purchase.

[Keeler: 0039] … For example, images can be incorporated into a head-mounted display that enables display of a virtual reality environment and allows a customer to interact with the virtual reality environment.

[Keeler: 0055] In one or more embodiments, devices can be individually rotated, and independently from each other. For example, device 452 can be rotated independently from device 454. In one or more embodiments, customer 250 can interact with a device via "hotspots" 456. For example, a "hotspot" can be or include an area that can allow customer 250 to interact with the device via a mouse, handset, keyboard, wand, glove, voice, head-mounted display (e.g., movement of the head moving the head-mounted display) or other interaction device. For instance, customer 250 can interact with a hotspot (e.g., clicks with a mouse on the hotspot) to activate behavior indicated by the hotspot.

[Keeler: 0073] Turning now to FIG. 8, a further detailed aspect of a virtual environment configured to interact with a device via a HMD is illustrated, according to one or more embodiments. In one or more embodiments, HMD 212 can receive user input from customer 250 that selects a device. For example, HMD 212 can receive user input from customer 250 that selects device 222 from among devices 220-222. In one or more embodiments, HMD 212 can receive user input from customer 250 that indicates one or more of an expanded view of a device and a rotation of the device, among others. For example, one or more "hotspots" associated with a display of device 222 can be selected that can expand a view of device 222, that can rotate device 222, etc. In one instance, HMD 212 can display device 222 via an expanded view 828. In another instance, HMD 212 can display device 222 via different display angles 830 and 832.

[Keeler: 0074] In one or more embodiments, customer 250 can interact with a virtual device via a virtual machine. For example, customer 250 can interact with virtual device 222, and virtual device 222 can be executing on a virtual machine. For more information regarding a virtual device executing on a virtual machine, please refer to U.S. application Ser. No. 13/601,537, filed 31 Aug. 2012, titled "Methods and Systems of Providing Items to Customers Via a Network".

Please note: Niemeyer, application 13/601,537 (US 2013/0297281), is "incorporated by reference in its entirety as though fully and completely set forth herein"; see [Keeler: 0001].

In Keeler (Niemeyer) see at least:

[Niemeyer: 0142] Turning now to FIGS. 6A-6C, exemplary diagrams of a simulated object are illustrated, according to one or more embodiments. As shown in FIG. 6A, simulated object 3050 can include one or more of a wireless telephone (e.g., a cellular telephone, a satellite telephone, a wireless Ethernet telephone, etc.), a digital music player, a tablet computing device, and a PDA, among others. As illustrated, object 3050 can include one or more of a simulated sound output device 6010, a simulated display 6020, and simulated buttons 6030-6032.
[Niemeyer: 0143] As shown, simulated display 6020 can display one or more of a picture or graphic 6050 and one or more buttons or icons 6040-6046. In one or more embodiments, a customer (e.g., a user of a CCD) can select and/or actuate one or more of icons 6040-6045 and buttons 6030-6032, and simulated object 3050 can perform one or more simulated functions associated with a selection or simulation of a selected icon or button of object 3050. In one example, the customer can select button 6031, and a numeric keypad can be displayed via simulated display 6020. For instance, keys of the numeric keypad can simulate a keypad of a telephone. In a second example, the customer can select button 6032, and an interface to a digital music player can be displayed via simulated display 6020. In another example, an icon of icons 6040-6046 can be selected to simulate a respective application of a calculator application, a clock application, a calendar application, a web browser application, a video chat application, a video player (e.g., a motion picture player) application, and a setting or configuration application.

Please note: The simulated mobile device, complete with simulated icons that are activated by user 250 using a virtual/augmented reality device, qualifies as a configurable object.

[Niemeyer: 0145] As shown in FIG. 6B, simulated display 6020 can display a simulation of a video chat application. In one example, a simulated picture or graphic 6141 of a person with whom the customer is chatting can be displayed via simulated display 6020. In another example, a simulated picture or graphic 6142 of the customer can be displayed via simulated display 6020. For instance, picture or graphic 6142 of the customer can demonstrate a front-facing camera of simulated object 3050. In one or more embodiments, the simulation of the video chat application can be started and/or executed in response to a selection and/or actuation of button or icon 6044 of FIG. 6A. For example, the simulation of the video chat application can be a video (e.g., a motion picture) that can be played via simulated display 6020.

Please note: The front-facing camera is a sensor which is being simulated.

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Keeler's displayed icons on the simulated device illustrated in Fig. 8 (222, 828) with activatable simulated icons displayed on a simulated mobile device as taught and illustrated by Niemeyer Fig. 6A (6044, Video Chat), in order to simulate functionality of one or more sensors (e.g., a front-facing camera) of the object (e.g., a mobile device) in a three-dimensional environment. For example, simulated object 3050 (Fig. 6A) displays various simulated icons; selecting simulated icon 6044 (Video Chat) demonstrates a simulated video chat as illustrated in Fig. 6B (3050).

in response to receiving the user input corresponding to the request to present the demonstration of the object, providing, via the display, a demonstration associated with one or more features of the object in association with the virtual content, wherein providing the demonstration includes simulating operation of one or more sensors of the object in the three-dimensional environment.

Rejection is based upon the teachings and rationale applied to claim 1 above by Keeler (Niemeyer). See above for the simulated video chat demonstration involving one or more sensors, e.g., the front-facing camera.

Regarding claims 9 and 17: Rejections of the independent claims are based upon the teachings and rationale applied to claim 1 by Keeler (Niemeyer) and further upon Keeler (Niemeyer) pertaining to system computing elements (e.g., devices, processor(s), memory, etc.); see Keeler: Figs. 25A-D.

Regarding claims 6, 14 and 22: Rejections are based upon the teachings and rationale applied to claims 1, 9 and 17 by Keeler (Niemeyer) regarding a demonstration associated with one or more aspects of the object configurable in response to receiving the one or more inputs. Please note: The simulated video chat is a response to one or more inputs that demonstrates a configurable feature, e.g., the simulated front-facing camera.

Claims 2, 3, 7, 8, 10, 11, 15, 16, 18, 19, 23 and 24 are rejected under 35 USC 103 as being unpatentable over Keeler (Niemeyer) in view of Hauenstein et al., US 2019/0065027 ("Hauenstein").

Regarding claims 2, 3, 10, 11, 18 and 19: Rejections are based in part upon the teachings applied to claims 1, 9 and 17 by Keeler (Niemeyer) and further upon the combination of Keeler (Niemeyer)-Hauenstein. Although the Keeler (Niemeyer) user interacts with a digital object in an augmented reality environment, Keeler (Niemeyer) does not expressly mention tracking a user's hand movement in view of a camera within an augmented reality scene. Hauenstein, on the other hand, would have taught Keeler (Niemeyer) such techniques. In Hauenstein see at least:

[Hauenstein: 0010] In accordance with some embodiments, a method is performed at a computer system with a display generation component and an input device. The method includes displaying, via the display generation component, a first virtual user interface object in a virtual three-dimensional space. The method also includes, while displaying the first virtual user interface object in the virtual three-dimensional space, detecting, via the input device, a first input that includes selection of a respective portion of the first virtual user interface object and movement of the first input in two dimensions.
[Hauenstein: 0015] In accordance with some embodiments, a computer system includes (and/or is in communication with) a display generation component (e.g., a display, a projector, a heads-up display, or the like), one or more cameras (e.g., video cameras that continuously provide a live preview of at least a portion of the contents that are within the field of view of the cameras and optionally generate video outputs including one or more streams of image frames capturing the contents within the field of view of the cameras), and one or more input devices (e.g., a touch-sensitive surface, such as a touch-sensitive remote control, or a touch-screen display that also serves as the display generation component, a mouse, a joystick, a wand controller, and/or cameras tracking the position of one or more features of the user such as the user's hands) …

[Hauenstein: 0050] In the discussion that follows, a computer system that includes an electronic device that has (and/or is in communication with) a display and a touch-sensitive surface is described. It should be understood, however, that the computer system optionally includes one or more other physical user-interface devices, such as a physical keyboard, a mouse, a joystick, a wand controller, and/or cameras tracking the position of one or more features of the user such as the user's hands.

[Hauenstein: 0158] In some embodiments, computer system 301 includes and/or is in communication with:

[Hauenstein: 0159] input device(s) (302 and/or 307, e.g., a touch-sensitive surface, such as a touch-sensitive remote control, or a touch-screen display that also serves as the display generation component, a mouse, a joystick, a wand controller, and/or cameras tracking the position of one or more features of the user such as the user's hands);

[Hauenstein: 0212] FIG. 5A2 illustrates an alternative method in which user 5002 views physical building model 5006 using a computer system that includes a headset 5008 and a separate input device 5010 with a touch-sensitive surface. In this example, headset 5008 displays the augmented reality environment and user 5002 uses the separate input device 5010 to interact with the augmented reality environment. In some embodiments, device 100 is used as the separate input device 5010. In some embodiments, the separate input device 5010 is a touch-sensitive remote control, a mouse, a joystick, a wand controller, or the like. In some embodiments, the separate input device 5010 includes one or more cameras that track the position of one or more features of user 5002 such as the user's hands and movement.

[Hauenstein: 0374] FIGS. 6A-6D are flow diagrams illustrating method 600 of adjusting an appearance of a virtual user interface object in an augmented reality environment, in accordance with some embodiments. Method 600 is performed at a computer system (e.g., portable multifunction device 100, FIG. 1A, device 300, FIG. 3A, or a multi-component computer system including headset 5008 and input device 5010, FIG. 5A2) having a display generation component (e.g., a display, a projector, a heads-up display, or the like), one or more cameras (e.g., video cameras that continuously provide a live preview of at least a portion of the contents that are within the field of view of the cameras and optionally generate video outputs including one or more streams of image frames capturing the contents within the field of view of the cameras), and an input device (e.g., a touch-sensitive surface, such as a touch-sensitive remote control, or a touch-screen display that also serves as the display generation component, a mouse, a joystick, a wand controller, and/or cameras tracking the position of one or more features of the user such as the user's hands).

[Hauenstein: 0404] … For example, in some embodiments, the filter is gradually applied based on movement of the input and the appearance of the respective virtual user interface object is gradually adjusted based on the speed and/or distance of movement of the input (e.g., movement of a contact on a touch-sensitive surface, movement of a wand, or movement of a hand of the user in view of a camera of the computer system) (e.g., movement of contact 5030 on touch screen 112, FIGS. 5A21-5A24).

[Hauenstein: 0413] FIGS. 8A-8C are flow diagrams illustrating method 800 of transitioning between viewing a virtual model in the augmented reality environment and viewing simulated views of the virtual model from the perspectives of objects in the virtual model, in accordance with some embodiments. Method 800 is performed at a computer system (e.g., portable multifunction device 100, FIG. 1A, device 300, FIG. 3, or a multi-component computer system including headset 5008 and input device 5010, FIG. 5A2) having a display generation component (e.g., a display, a projector, a heads-up display, or the like), one or more cameras (e.g., video cameras that continuously provide a live preview of at least a portion of the contents that are within the field of view of the cameras and optionally generate video outputs including one or more streams of image frames capturing the contents within the field of view of the cameras), and an input device (e.g., a touch-sensitive surface, such as a touch-sensitive remote control, or a touch-screen display that also serves as the display generation component, a mouse, a joystick, a wand controller, and/or cameras tracking the position of one or more features of the user such as the user's hands). In some embodiments, the input device (e.g., with a touch-sensitive surface) and the display generation component are integrated into a touch-sensitive display. As described above with respect to FIGS. 3B-3D, in some embodiments, method 800 is performed at a computer system 301 in which respective components, such as a display generation component, one or more cameras, one or more input devices, and optionally one or more attitude sensors are each either included in or in communication with computer system 301.

One of ordinary skill in the art before the effective filing date would have recognized that applying the known techniques of Hauenstein, which track a user's hand movement in view of a camera within an augmented reality scene, would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the techniques of Hauenstein to the teachings of Keeler (Niemeyer) would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such data processing features into similar systems. Obviousness under 35 USC 103 in view of the Supreme Court decision KSR International Co. v. Teleflex Inc.

Regarding claims 7, 8, 15, 16, 23 and 24: Rejections are based upon the teachings and rationale applied to the combination of Keeler (Niemeyer)-Hauenstein and further upon the combination of Keeler (Niemeyer)-Hauenstein:

[Hauenstein: 0224] FIGS. 5A25-5A27 illustrate changing the virtual environment setting for the augmented reality environment in response to an input (e.g., a tap input on a displayed button) that switches between different virtual environments for the virtual user interface object (e.g., virtual building model 5012), where different virtual environments are associated with different interactions for exploring the virtual user interface object (e.g., predefined virtual environments such as landscape view, interior view, day/night view). In FIG. 5A25, landscape button 5014 is selected, and the landscape view for virtual building model 5012 is displayed (e.g., with virtual trees, virtual bushes, a virtual person, and a virtual car). In FIGS. 5A26-5A27, device 100 detects an input on interior button 5016, such as a tap gesture by contact 5032, and in response, displays the interior view for virtual building model 5012 (e.g., with no virtual trees, no virtual bushes, no virtual person, and no virtual car, but instead showing an expanded view of virtual building model 5012 with virtual first floor 5012-d, virtual second floor 5012-c, virtual third floor 5012-b, and virtual roof 5012-a). In some embodiments, when the virtual environment setting is changed (e.g., to the interior view), the surrounding physical environment is blurred out (e.g., using a filter). For example, although not shown in FIG. 5A27, in some embodiments, wallpaper 5007 is blurred out when the virtual environment setting is changed to the interior view.

Please note: a) Fig. 5A27 demonstrates how various components of the simulated building are constructed; and b) contact 5032 represents the user's hand/finger.

Claims 4, 12, 20 and 25 are rejected under 35 USC 103 as being unpatentable over Keeler (Niemeyer) and Hauenstein as applied to claims 1, 3, 11 and 19, further in view of Sempe et al., US 11,250,617 ("Sempe"). Rejections are based in part on the teachings and rationale applied to claims 1, 3, 11 and 19 by Keeler (Niemeyer)-Hauenstein and further upon the combination of Keeler (Niemeyer)-Hauenstein-Sempe. Although Keeler (Niemeyer)-Hauenstein simulate operation of one or more sensors of the object in the three-dimensional environment, Keeler (Niemeyer)-Hauenstein do not expressly mention displaying a simulation of the representation of the object taking a picture of a respective portion of the three-dimensional environment. Sempe, on the other hand, would have taught Keeler (Niemeyer)-Hauenstein such techniques.
In Sempe see at least:

(Sempe: Abstract, front page) A technology is described for capturing electronic images of a three-dimensional environment using a virtual camera. In one example of the technology, positioning data may be obtained for a camera control device in a physical environment. The positioning data may define a physical position and physical orientation of the camera control device in the physical environment. A virtual camera may be placed and controlled in a virtual three-dimensional (3D) environment using in part the positioning data for the camera control device to determine a virtual position and virtual orientation of the virtual camera in the virtual 3D environment. A view of the virtual 3D environment, from a viewpoint of the virtual camera, may be determined using the virtual position and virtual orientation of the virtual camera in the virtual 3D environment, and electronic images of the virtual 3D environment may be rendered as defined by the view.

(Sempe: D15: col. 3, lines 31-45) A user may navigate a virtual camera through a virtual 3D environment by physically moving around a physical space with a camera control device to direct a virtual lens of the virtual camera at subjects in the virtual 3D environment. The user may manipulate a view of the virtual 3D environment, as viewed through a simulated camera view finder, using virtual camera controls which may be mapped to the camera control device, or alternatively to another controller device, such as a game controller. The virtual camera controls may allow the user to capture electronic images of the virtual 3D environment and control other camera functions. Electronic images (e.g., video and/or images) of the virtual 3D environment captured using a virtual camera may include events and actions that may be occurring in the virtual 3D environment at the time the electronic images were captured.

One of ordinary skill in the art before the effective filing date would have recognized that applying the known techniques of Sempe, in which a user may navigate a virtual camera through a virtual 3D environment by physically moving around a physical space with a camera control device to direct a virtual lens of the virtual camera at subjects in the virtual 3D environment, would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the techniques of Sempe to the teachings of Keeler (Niemeyer)-Hauenstein would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such data processing features into similar systems. Obviousness under 35 USC 103 in view of the Supreme Court decision KSR International Co. v. Teleflex Inc.

Claims 5, 13 and 21 are rejected under 35 USC 103 as being unpatentable over Keeler (Niemeyer) in view of Harper, US 10,049,276. Rejections are based in part on the teachings and rationale applied to claims 1, 9 and 17 by Keeler (Niemeyer) and further upon the combination of Keeler (Niemeyer)-Harper. Although Keeler (Niemeyer) does not expressly mention the use of an audible tutorial of one or more features of the object, Harper would have taught Keeler (Niemeyer) such techniques. In Harper see at least:

(Harper: B3: col. 1, lines 40-58) Briefly described, embodiments are directed toward systems and methods of presenting an augmented reality for electronic device installation and troubleshooting. A mobile computing device captures images of an electronics cable and an electronic device. The images of the cable are analyzed to determine non-text characteristics of a connector of the cable. Similarly, the images of the electronic device are analyzed to determine non-text characteristics of at least one port on the electronic device. In some embodiments, these non-text characteristics can be compared to each other to determine if the connector is compatible with one of the ports on the electronic device. In other embodiments, these non-text characteristics can be compared with non-text characteristics of known connectors and ports to determine a type of the connector and a type of the ports on the electronic device. The system can then determine if the cable connector and electronic device ports are compatible based on a comparison of the type of cable connector and the type of ports.

(Harper: D15: col. 3, line 54-col. 4, line 6) In one example scenario the user 140 may want to add a new television receiver 122 into their entertainment system 128. The user 140 can position the new television receiver 122 into the entertainment system 128 or the user 140 can place it on the floor or on a table. There will usually be one or more cables or wired connections that extend between the television receiver 122, the display device 124 and one or more peripheral devices 126, however, these are not shown in FIG. 1 for ease of illustration. Their use and locations will be explained in subsequent figures. To receive instructions on how to install the television receiver 122, the user 140 accesses an application or program executing on the mobile computing device 144. This application includes tutorials, instructions, or other information on how to connect the new television receiver 122 to other electronic devices, such as the display device 124 or other peripheral devices 126. Similarly, the application on the mobile computing device 144 may also include troubleshooting information for assessing and fixing various issues that can arise during or after the installation process.

(Harper: D27: col. 5, line 55-col. 6, line 7) For example, assume the user needs to know the model number of the electronic device, such as for registering the electronic device with the manufacturer. The user can use the camera on the mobile computing device to take images of the electronic device. These images are utilized to identify a side of the electronic device that is facing the camera. The images can then be augmented with commands, or the mobile computing device can provide audible commands, to instruct the user to turn the electronic device to the correct side that includes the model number of the electronic device. As the user is turning the electronic device, the mobile computing device can continue to capture images and instruct the user to keep turning the electronic device until the model number is visible. At this point, the images can be augmented with arrows, circles, or other text, graphics, or symbols to show in real time where the model number is located on the electronic device. This augmented reality and instructions can guide a user to find information on the electronic devices, buttons, displays, ports, or other features or components of the electronic device.

Please note: Buttons, displays, ports, or other features or components are examples of features/attributes.

One of ordinary skill in the art before the effective filing date would have recognized that applying the known techniques of Harper, which provide audible instructions/tutorials regarding object features, e.g., adding a receiver to a television system, would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the techniques of Harper to the teachings of Keeler (Niemeyer) would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such data processing features into similar systems. Obviousness under 35 USC 103 in view of the Supreme Court decision KSR International Co. v. Teleflex Inc.
Pertinent Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 8,239,840 (Czymontek), "Sensor Simulations for Mobile Device Applications," discloses:

(B7: col. 1, lines 50-63) The visual programming IDE allows the user to troubleshoot the full functionality of the mobile device applications by simulating the occurrence of certain sensor events. For example, to test functionality of an application that may rely on accelerometer sensor or orientation sensor inputs, the user may select a "simulate shaking" button or may scroll an image of a virtual, on-screen compass, respectively, to simulate the effect that corresponding sensor events would have on the application. The user may monitor the effect of these events and may fine tune the application accordingly, without requiring the mobile device application to actually be deployed on a mobile device, thereby providing reliable, deterministic and reproducible testing of an application.

(D104: col. 19, line 55-col. 20, line 3) Because the user may interact with the visual programming IDE on a device that does not include mobile device-specific functionality, such as a desktop computer that does not have the ability to generate accelerometer data, the process 600 allows the user to troubleshoot the applications by simulating the occurrence of certain sensor events. For example, to test functionality of a mobile device application that may rely on the occurrence of certain accelerometer sensor events or orientation sensor events, the process 600 may allow a user to select a "simulate shaking" button or to manipulate a virtual, on-screen compass, respectively, to simulate the effect that corresponding sensor events would have on the application. In doing so, the full functionality of the application may be tested before the application is ever deployed to a mobile device, providing reliable, deterministic and reproducible testing of an application.
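The sensor-event simulation pattern Czymontek describes (injecting synthetic sensor events so an application can be exercised without deployment to a physical device) can be sketched as follows. All names here are hypothetical, for illustration only, and are not drawn from the patent:

```python
# Illustrative sketch: an app handler consumes sensor events, and a test
# harness injects a synthetic "shake" event, analogous to the IDE's
# "simulate shaking" button, so behavior can be tested off-device.
from dataclasses import dataclass

@dataclass
class SensorEvent:
    sensor: str      # e.g. "accelerometer", "orientation"
    values: tuple    # raw readings

class App:
    def __init__(self):
        self.shake_count = 0

    def on_sensor_event(self, event: SensorEvent):
        # Treat a large acceleration magnitude as a shake gesture.
        if event.sensor == "accelerometer":
            magnitude = sum(v * v for v in event.values) ** 0.5
            if magnitude > 20.0:
                self.shake_count += 1

def simulate_shaking(app: App):
    # Inject a synthetic high-magnitude accelerometer event.
    app.on_sensor_event(SensorEvent("accelerometer", (25.0, 3.0, 1.0)))

app = App()
simulate_shaking(app)
print(app.shake_count)  # → 1
```

Because the events are constructed deterministically, the same inputs always produce the same behavior, which is the "reliable, deterministic and reproducible testing" the reference emphasizes.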
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT M POND, whose telephone number is (571) 272-6760. The examiner can normally be reached M-F, 8:30 AM-6:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jeffrey Smith, can be reached at 571-272-6763. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROBERT M POND/
Primary Examiner, Art Unit 3688
February 19, 2026

Prosecution Timeline

Feb 26, 2024
Application Filed
Jan 09, 2025
Response after Non-Final Action
Aug 23, 2025
Non-Final Rejection — §103
Nov 18, 2025
Examiner Interview Summary
Nov 18, 2025
Applicant Interview (Telephonic)
Nov 25, 2025
Response Filed
Feb 19, 2026
Final Rejection — §103
Apr 01, 2026
Applicant Interview (Telephonic)
Apr 01, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597061
AUTOMATED ORDER PLACEMENT, PAYMENT, AND SHIPPING SYSTEM FOR ONLINE SHOP, AND METHOD OF OPERATING ONLINE SHOP FOR AUTOMATED ORDER PLACEMENT, PAYMENT, AND SHIPPING
2y 5m to grant Granted Apr 07, 2026
Patent 12597059
SYSTEM AND METHOD FOR DYNAMIC REAL-TIME CROSS-SELLING OF PASSENGER ORIENTED TRAVEL PRODUCTS
2y 5m to grant Granted Apr 07, 2026
Patent 12597060
GENERATIVE APPAREL RECOMMENDATIONS USING IMAGES OF A PERSON DURING THE COURSE OF A COMMUNICATIONS SESSION AMONG USERS
2y 5m to grant Granted Apr 07, 2026
Patent 12586124
SYSTEM AND METHOD FOR FACILITATING THE RESALE OF GOODS
2y 5m to grant Granted Mar 24, 2026
Patent 12579562
FASHION DATABASE SYSTEM, METHOD FOR CONTROLLING FASHION DATABASE, AND FASHION DATABASE PROGRAM
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

3-4
Expected OA Rounds
71%
Grant Probability
99%
With Interview (+42.4%)
3y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 695 resolved cases by this examiner. Grant probability derived from career allow rate.
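Since the footnote says grant probability is derived from the career allow rate, the headline figures can be sanity-checked directly from the counts shown on this page (495 granted of 695 resolved). How the tool combines the +42.4% interview lift into the 99% "with interview" figure is not specified, so that number is not reproduced here; this is only a check of the base rate.

```python
# Figures taken from this page: 495 granted / 695 resolved cases.
granted, resolved = 495, 695
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # → 71.2%, shown on the page rounded to 71%

# The "+42.4% interview lift" reads as the gap in allow rate between
# resolved cases with and without an examiner interview; the exact model
# behind the 99% with-interview probability is not disclosed.
interview_lift = 0.424
```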
