Prosecution Insights
Last updated: April 19, 2026
Application No. 19/044,275

CONTROLLING ELECTRONIC DEVICES BASED ON WIRELESS RANGING

Non-Final OA: §101, §103, §DP
Filed
Feb 03, 2025
Examiner
MUNION, JAMES E
Art Unit
2686
Tech Center
2600 — Communications
Assignee
Apple Inc.
OA Round
1 (Non-Final)
76%
Grant Probability
Favorable
1-2
OA Rounds
2y 3m
To Grant
99%
With Interview

Examiner Intelligence

Grants 76% — above average
76%
Career Allow Rate
103 granted / 135 resolved
+14.3% vs TC avg
Strong +24% interview lift
+23.5%
Interview Lift
based on resolved cases with an interview
Typical timeline
2y 3m
Avg Prosecution
30 currently pending
Career history
165
Total Applications
across all art units
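The headline figures in this card reduce to simple ratios. A minimal sketch of the arithmetic (variable names are ours, not the tool's):

```python
# Career allow rate from the examiner's resolved cases, as shown above.
granted = 103
resolved = 135

allow_rate = granted / resolved              # ~0.763, displayed as 76%
delta_vs_tc = 0.143                          # "+14.3% vs TC avg" (given)
implied_tc_avg = allow_rate - delta_vs_tc    # implied Tech Center baseline

print(f"allow rate: {allow_rate:.1%}")       # allow rate: 76.3%
print(f"implied TC average: {implied_tc_avg:.1%}")   # implied TC average: 62.0%
```

Note that 103/135 rounds to 76.3%, which the card truncates to "76%".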

Statute-Specific Performance

§101
5.6%
-34.4% vs TC avg
§103
52.2%
+12.2% vs TC avg
§102
29.6%
-10.4% vs TC avg
§112
9.8%
-30.2% vs TC avg
Black line = Tech Center average estimate • Based on career data from 135 resolved cases
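The per-statute deltas above are internally consistent: subtracting each stated delta from the examiner's rate recovers the same Tech Center baseline. A quick check (rates taken verbatim from the chart; the chart does not specify exactly what each rate measures):

```python
# Examiner's statute-specific rates and the stated deltas vs the TC average.
examiner = {"101": 0.056, "103": 0.522, "102": 0.296, "112": 0.098}
delta    = {"101": -0.344, "103": 0.122, "102": -0.104, "112": -0.302}

# Implied TC baseline per statute: examiner rate minus the stated delta.
implied_tc = {k: round(examiner[k] - delta[k], 3) for k in examiner}
print(implied_tc)   # every statute implies the same 40% baseline
```

All four statutes imply a 40% baseline, consistent with a single "black line" estimate across the chart.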

Office Action

§101 §103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-2, 6-11 & 15-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4-5, 7-8, 14-15 & 18-19 of U.S. Patent No. 10,368,378. Although the claims at issue are not identical, they are not patentably distinct from each other because they are claiming the same invention with little change to the claim language. The patent claims are narrower and thus teach all the limitations of the instant claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed subject matter is directed to an abstract idea without significantly more.

Claims 1, 10 and 19 are directed to the abstract idea of interpreting a user’s intent and mapping that intent to a command for controlling an object. The claimed steps—“determining an intent to control an object via interaction with a virtual object,” “identifying the object to be controlled,” and “determining a command to control the object”—are evaluative, cognitive, and decision-making operations that reflect mental processes (observations, inferences, and mapping of inputs to outputs) and rule-based information processing. Such mental processes and conceptual mappings are recognized judicial exceptions to patent eligibility under § 101. See Alice Corp. v. CLS Bank Int’l, 573 U.S. 208 (2014); see also MPEP § 2106.04, Step 2A, Prong One (judicial exceptions). The claim’s final step—“transmitting wirelessly a command value corresponding to the command to control the object”—is an output or communication of the result of the abstract mapping. The virtual object being associated with a physical location separate from the object provides a contextual field-of-use limitation but does not by itself remove the abstract character of the recited cognitive/mapping operations.

The claim recites a wireless device and wireless transmission of a command value. However, the claim does not recite a specific technological improvement to the functioning of the wireless device, the communications network, the intent-detection technique, or the control of the physical object. The limitations are expressed at a high level of generality and lack concrete technical detail (for example, no specific intent-recognition algorithm, sensor modalities, signal-processing steps, timing/latency constraints, communication-protocol details, authentication/safety mechanisms, or non-generic actuation/control logic are recited).
As such, the abstract idea is not integrated into a practical application by the recited elements. See Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (Fed. Cir. 2016) (claims that improve computer functionality are not directed to an abstract idea); Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350 (Fed. Cir. 2016) (collect/analyze/display held abstract).

Step 2A — Prong Two (Integration into a Practical Application). The additional claim elements (a generic “wireless device,” a virtual object associated with a separate physical location, and wireless transmission of a command value) are well-understood, routine, and conventional activities for implementing control and communication systems absent further limitations. Using generic wireless technology to send a command based on a determination of user intent amounts to implementing the abstract idea on conventional hardware rather than supplying an inventive concept. See Alice; BASCOM Global Internet Servs. v. AT&T Mobility LLC, 827 F.3d 1341 (Fed. Cir. 2016) (conventional components do not automatically supply significantly more).

Step 2B — “Significantly More” Analysis. The ordered combination of claim steps—detect/interpret intent → identify target object → determine command → transmit command—reflects a conventional information-processing and control workflow (collect/interpret/actuate) and does not recite an unconventional arrangement or technical improvement that would transform the abstract idea into patent-eligible subject matter. Absent specification evidence or claim limitations showing that the recited steps are not well-understood, routine, or conventional, the claim does not supply “significantly more.” See Berkheimer v. HP Inc., 881 F.3d 1360 (Fed. Cir. 2018) (factual showing required to rebut conventionality).
For the reasons stated above, Claims 1, 10 and 19 are directed to an abstract idea (mental processes and mapping of inputs to outputs), and the additional recited elements, individually and as an ordered combination, do not add significantly more. Claims 1, 10 and 19 are therefore rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter. The dependent claims are rejected for depending on a rejected base claim.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over De Schepper (US Patent Application Publication No. 2014/0320274 A1) in view of Baker (US Patent Application Publication No. 2016/0224036 A1).

In re claim 1, De Schepper teaches A method comprising: determining an intent to control an object via interaction with a virtual object (Para [0019]: “Accordingly, when a video camera device identifies an object corresponding to the remotely controllable device, it provides this information so the gesture server device, which monitors the movement of this device.”); identifying the object to be controlled (Para [0019]: “The object can be of any kind, especially a self-identifying object. An example for such a generic gesture is moving a remotely controllable device towards a printer, e.g.
towards the physical location of a printer or any kind of virtual representation of the printer.”); determining a command to control the object identified (Para [0019]: “By combining this generic printing gesture with information regarding the dimensions of this remotely controllable device as object, a particular gesture for printing on this remotely controllable device can be defined.”); and transmitting wirelessly a command value corresponding to the command to control the object (Para [0019]: “The gesture server device identifies the printing gesture and controls the remotely controllable device to print a current screen or document.”), wherein the virtual object is associated with a physical location that is separate from the object (Para [0040]: “In step S11, the printer 12 registers to the gesture recognition device 3. Accordingly, the gesture recognition device 3 receives information for recognizing the printer 12, i.e. information regarding the dimensions of the printer 12, i.e. an object definition of the printer 12. This information is provided from the gesture recognition device 3 to all connected devices 9, 11, 12, 13. The printer 12 is not remotely controllable in this embodiment, but a self-identifying object 12. In an alternative embodiment, the printer 12 provides information regarding its dimensions as broadcast to all connected devices 9, 11, 12, 13.”).

De Schepper fails to teach by a wireless device. However, Baker teaches by a wireless device (Para [0012]: “The user may engage the wearable control device, or another device in the load control system, for enabling load control based on gestures performed by the user.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified De Schepper to incorporate the teachings of Baker to provide by a wireless device with the METHOD FOR GESTURE CONTROL, GESTURE SERVER DEVICE AND SENSOR INPUT DEVICE of De Schepper.
Doing so enables load control based on gestures performed by the user, as recognized by Baker (Para [0012]).

Device claim 10 and computer-readable medium claim 19 are rejected for the same reasons as method claim 1 for having similar limitations and being similar in scope.

In re claim 2, De Schepper and Baker teach all of the limitations of claim 1 stated above, where De Schepper further teaches further comprising: specifying the physical location associated with the virtual object by at least: positioning the wireless device proximate to or touching the physical location (Para [0019]: “An example for such a generic gesture is moving a remotely controllable device towards a printer, e.g. towards the physical location of a printer or any kind of virtual representation of the printer.”); and activating a virtual icon command via the wireless device (Para [0024]: “In one example, a relative geographic position of different remotely controllable devices can be detected by a sensor input device, e.g. a video camera device, and displayed on a touch screen by icons to enable drag and drop of content between the different remotely controllable devices. The content can then be transferred from the one to the other device by any kind of communication connection, either an existing or an explicitly established communication connection.”).

Device claim 11 and computer-readable medium claim 20 are rejected for the same reasons as method claim 2 for having similar limitations and being similar in scope.

In re claim 3, De Schepper and Baker teach all of the limitations of claim 2 stated above, where De Schepper further teaches wherein activation of the virtual icon command comprises receiving a tactile input via a user interface of the wireless device (Para [0024]: “In one example, a relative geographic position of different remotely controllable devices can be detected by a sensor input device, e.g.
a video camera device, and displayed on a touch screen by icons to enable drag and drop of content between the different remotely controllable devices. The content can then be transferred from the one to the other device by any kind of communication connection, either an existing or an explicitly established communication connection.”).

Device claim 12 is rejected for the same reasons as method claim 3 for having similar limitations and being similar in scope.

In re claim 4, De Schepper and Baker teach all of the limitations of claim 2 stated above, where Baker further teaches wherein activation of the virtual icon command comprises interpreting a voice input received via a microphone of the wireless device (Para [0045]: “The control-target device may be identified using another device. For example, the user 222 may select one or more control-target devices or zones on the wireless communication device 232… For example, the user 222 may select the lamp 206 and/or lighting control device 204 on the wireless communication device 232 and may raise the arm on which the wearable control device 228 is being worn to increase the intensity of the lamp 206.” and Para [0053]: “The audio commands may also be used to identify the device. The user 222 may say “kitchen lights” to identify the lamps in the kitchen and may raise an arm to increase the dimming level of the identified kitchen lamps. The voice command may identify a location, zone, and/or lighting load for being controlled by the wearable control device 228. The audio commands may also be stored in the gesture datastore and may be accessed at different locations.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of De Schepper and Baker to further incorporate the teachings of Baker to provide wherein activation of the virtual icon command comprises interpreting a voice input received via a microphone of the wireless device with the METHOD FOR GESTURE CONTROL, GESTURE SERVER DEVICE AND SENSOR INPUT DEVICE of De Schepper as modified by Baker. Doing so enables voice commands to identify a location, zone, and/or lighting load for being controlled by the wearable control device 228, as recognized by Baker (Para [0053]).

Device claim 13 is rejected for the same reasons as method claim 4 for having similar limitations and being similar in scope.

In re claim 5, De Schepper and Baker teach all of the limitations of claim 1 stated above, where De Schepper further teaches wherein the virtual object comprises a virtual representation of a controller for controlling the object (Para [0019]: “The gesture server device identifies the printing gesture and controls the remotely controllable device to print a current screen or document.” and Para [0025]: “Gesture recognition can imply virtual objects, which are e.g. displayed on a screen or another visualization device. These virtual objects can be used for gestures as specified above.”).

Device claim 14 is rejected for the same reasons as method claim 5 for having similar limitations and being similar in scope.
In re claim 6, De Schepper and Baker teach all of the limitations of claim 1 stated above, where Baker further teaches wherein determination of the intent to control the object comprises: detecting an intent gesture by measuring an orientation of the wireless device relative to the virtual object (Para [0041]: “The orientation of the wearable control device 228 may be used to determine an angle of the arm of the user 222 when raised or lowered (e.g., from an initial starting point of zero). The different angles at which the user 222 positions an arm may indicate different control-target devices and/or control instructions. For example, the user 222 may position an arm at different angles to identify different zones or scenes for performing load control.”).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of De Schepper and Baker to further incorporate the teachings of Baker to provide wherein determination of the intent to control the object comprises: detecting an intent gesture by measuring an orientation of the wireless device relative to the virtual object with the METHOD FOR GESTURE CONTROL, GESTURE SERVER DEVICE AND SENSOR INPUT DEVICE of De Schepper as modified by Baker. Doing so enables indicating different control-target devices and/or control instructions, as recognized by Baker (Para [0041]).

Device claim 15 is rejected for the same reasons as method claim 6 for having similar limitations and being similar in scope.
In re claim 7, De Schepper and Baker teach all of the limitations of claim 6 stated above, where Baker further teaches wherein determination of the command to control the object comprises: detecting an action gesture separate from the intent gesture, the action gesture including movement of the wireless device relative to the virtual object (Para [0041]: “The distance the user 222 raises or lowers an arm may also, or alternatively, be used to indicate different control-target devices and/or control instructions.”).

Device claim 16 is rejected for the same reasons as method claim 7 for having similar limitations and being similar in scope.

In re claim 8, De Schepper and Baker teach all of the limitations of claim 1 stated above, where De Schepper further teaches wherein the virtual object comprises a virtual control defined by the object to be controlled (Para [0019]: “The gesture server device identifies the printing gesture and controls the remotely controllable device to print a current screen or document.”).

Device claim 17 is rejected for the same reasons as method claim 8 for having similar limitations and being similar in scope.

In re claim 9, De Schepper and Baker teach all of the limitations of claim 8 stated above, where De Schepper further teaches wherein the object to be controlled wirelessly advertises (Para [0040]: “In step S11, the printer 12 registers to the gesture recognition device 3. Accordingly, the gesture recognition device 3 receives information for recognizing the printer 12, i.e. information regarding the dimensions of the printer 12, i.e. an object definition of the printer 12. This information is provided from the gesture recognition device 3 to all connected devices 9, 11, 12, 13. The printer 12 is not remotely controllable in this embodiment, but a self-identifying object 12.
In an alternative embodiment, the printer 12 provides information regarding its dimensions as broadcast to all connected devices 9, 11, 12, 13.”) the physical location associated with the virtual object (Para [0024]: “In one example, a relative geographic position of different remotely controllable devices can be detected by a sensor input device…” and Para [0025]: “Furthermore, the gesture server device can control the remotely controllable device so facilitate identification and registration of this device or an instance thereof. Preferably, the gesture server device contacts the remotely controllable device, i.e. a single instance of this device, and controls it to generate a visually recognizable signal, e.g. to flash a light, to switch on a display, to show a particular display screen, to light an LED, to provide an infrared or ultraviolet LED signal, preferably with a specific sequence. These signals can be recognized by a video camera device, which can thereby detect the location of an instance of a remotely controllable device.”).

Device claim 18 is rejected for the same reasons as method claim 9 for having similar limitations and being similar in scope.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES EDWARD MUNION whose telephone number is (571) 270-0437. The examiner can normally be reached Monday-Friday 7:30-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Steven Lim, can be reached at 571-270-1210. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JAMES E MUNION/
Examiner, Art Unit 2688
03/19/2026

Prosecution Timeline

Feb 03, 2025
Application Filed
Mar 19, 2026
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602988
TESTING OF DETECTION AND WARNING FUNCTIONS OF INTERCONNECTED SMOKE, HEAT AND CARBON MONOXIDE ALARMS BY SINGLE PERSON
2y 5m to grant Granted Apr 14, 2026
Patent 12582095
SYSTEMS, METHODS AND DEVICES FOR COMMUNICATION
2y 5m to grant Granted Mar 24, 2026
Patent 12560268
CONDUIT SECURITY TECHNIQUES
2y 5m to grant Granted Feb 24, 2026
Patent 12562045
WEARABLE DEVICE USED AS DIGITAL POOL ATTENDANT
2y 5m to grant Granted Feb 24, 2026
Patent 12552473
CHAIN PIN ASSEMBLY
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+23.5%)
2y 3m
Median Time to Grant
Low
PTA Risk
Based on 135 resolved cases by this examiner. Grant probability derived from career allow rate.
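A plausible reading of how the headline projections combine (the 99% cap is our assumption; the tool's actual model is not disclosed):

```python
# Base grant probability from the career allow rate, adjusted by the
# observed interview lift; capped at 99% so no projection reads as certainty.
base_rate = 0.76        # career allow rate (103/135 resolved cases)
interview_lift = 0.235  # "+23.5%" lift in resolved cases with an interview

with_interview = min(base_rate + interview_lift, 0.99)
print(f"{with_interview:.0%}")   # 99%
```

The uncapped sum would be 99.5%, so the displayed "99% With Interview" is consistent with simple addition plus a cap.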
