Prosecution Insights
Last updated: April 19, 2026
Application No. 18/041,029

AUGMENTING REALITY

Status: Non-Final OA (§103), Round 3
Filed: Feb 08, 2023
Examiner: TEIXEIRA MOFFAT, JONATHAN CHARLES
Art Unit: 3700
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Cochlear Limited

Grant Probability: 71% (Favorable); 81% with interview
Expected OA Rounds: 3-4
Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 71%, above average (222 granted / 312 resolved; +1.2% vs TC avg)
Interview Lift: +9.9% for resolved cases with interview (moderate, roughly +10%)
Typical Timeline: 2y 9m average prosecution
Career History: 881 total applications across all art units; 569 currently pending

Statute-Specific Performance

§101: 5.2% (-34.8% vs TC avg)
§103: 45.0% (+5.0% vs TC avg)
§102: 23.5% (-16.5% vs TC avg)
§112: 21.9% (-18.1% vs TC avg)

TC averages are estimates. Based on career data from 312 resolved cases.

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, filed 12/23/2025, with respect to the claim rejections under 35 USC 101 have been fully considered and are persuasive. The 35 USC 101 rejection of claims 16 and 19-23 has been withdrawn.

Applicant's arguments filed 12/23/2025 have been fully considered, but they are not persuasive.

Regarding independent claim 1, the applicant argues that the prior art does not teach "using one or more sensors of a sensory prosthesis" and "adjusting stimulation provided by the sensory prosthesis". The applicant argues that Shalon teaches detection using a first device/system and stimulation using a second device/system, whereas the present application uses one system. However, Shalon teaches "real-time biofeedback that is based on eating microstructure events could be utilized to monitor and modify human behavior and in particular eating behavior" ([0054]). Therefore, regardless of the number of components the system is split into, the purpose is to implement biofeedback into stimulation in order to modify human consumption behavior. Additionally, it is well known in the art to incorporate the sensing and stimulation into one system. For example, Fuerst et al. (US 10252058 B1), hereinafter Fuerst, teaches "the method may include providing an integrated platform to a user, wherein the platform may include sensors positioned on a headband configured for measuring EEG and a plurality of electrodes configured for application of electro-stimulation". Additionally, the MPEP supports the reasoning that there is no patentable significance in combining the sensors and stimulation. MPEP 2141.02(I) states that "in determining the differences between the prior art and the claims, the question under 35 U.S.C. 103 is not whether the differences themselves would have been obvious, but whether the claimed invention as a whole would have been obvious. Stratoflex, Inc. v. Aeroquip Corp., 713 F.2d 1530, 218 USPQ 871 (Fed. Cir. 1983); Schenck v. Nortron Corp., 713 F.2d 782, 218 USPQ 698 (Fed. Cir. 1983) (Claims were directed to a vibratory testing machine (a hard-bearing wheel balancer) comprising a holding structure, a base structure, and a supporting means which form 'a single integral and gaplessly continuous piece.' Nortron argued the invention is just making integral what had been made in four bolted pieces, improperly limiting the focus to a structural difference from the prior art and failing to consider the invention as a whole.)". Therefore, making the sensors and stimulation integral in the same device, compared to merely being in the same system, makes no patentably significant difference in the functioning of the system.

Regarding independent claim 9, the applicant argues that the prior art does not teach that data is collected "from the microphone and the movement sensor of the sensory prosthesis". For the reasons set forth above regarding claim 1, MPEP 2141.02(I) and Schenck v. Nortron support the reasoning that there is no patentable significance in combining the sensors and stimulation, and making the sensors and stimulation integral in the same device, compared to merely being in the same system, makes no patentably significant difference in the functioning of the system. Additionally, the specification of the present application contradicts the claim limitation that the microphone and movement sensor must be integrated on the same device. Paragraph [0035] of the present application teaches that "in some examples, the sensory prosthesis 110 includes multiple cooperating components disposed in separate housings. An example sensory prosthesis 110 includes an external component (e.g., having components to receive and process sensory data) configured to communicate with an implantable component (e.g., having components to deliver stimulation to cause a sensory percept in the recipient)". Therefore, even a claim limitation that the sensors and stimulation must be incorporated into a single device rather than a single system would contradict the disclosure itself.

Regarding claim 16, the applicant argues that the prior art does not teach receiving data from a sensory prosthesis associated with a recipient and adjusting sensory output of the sensory prosthesis based on the consumption behavior of the recipient of the sensory prosthesis. However, Shalon teaches "real-time biofeedback that is based on eating microstructure events could be utilized to monitor and modify human behavior and in particular eating behavior" ([0054]). Therefore, regardless of the number of components the system is split into, the purpose is to implement biofeedback into stimulation in order to modify human consumption behavior. Additionally, it is well known in the art to incorporate the sensing and stimulation into one system.
For example, Fuerst et al. (US 10252058 B1), hereinafter Fuerst, teaches "the method may include providing an integrated platform to a user, wherein the platform may include sensors positioned on a headband configured for measuring EEG and a plurality of electrodes configured for application of electro-stimulation". Additionally, as set forth above regarding claim 1, MPEP 2141.02(I) and Schenck v. Nortron Corp., 713 F.2d 782, 218 USPQ 698 (Fed. Cir. 1983), support the reasoning that there is no patentable significance in combining the sensors and stimulation. Therefore, making the sensors and stimulation integral in the same device, compared to merely being in the same system, makes no patentably significant difference in the functioning of the system.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-9, 11-16, and 19-23 are rejected under 35 U.S.C. 103 as being unpatentable over Shalon et al. (US 20110125063 A1), hereinafter Shalon, in view of Joshi et al. (US 20200158819 A1), hereinafter Joshi (both cited previously).

Regarding claim 1, Shalon teaches a method comprising: detecting, using one or more sensors of a sensory prosthesis ([0054] real-time biofeedback that is based on eating microstructure events could be utilized to monitor and modify human behavior and in particular eating behavior), behavioral data associated with a recipient of the sensory prosthesis ([0012] accumulating data relating to ingestion behavior); and adjusting stimulation provided by the sensory prosthesis to adjust the consumption behavior ([0139] trigger electrical stimulation). Shalon fails to teach artificial intelligence.
Joshi teaches that determining the consumption behavior based on the consumption behavior indicia includes applying an artificial intelligence framework to the consumption behavior indicia and determining the behavior based on an output of the artificial intelligence framework ([0093] learn high level behavior that are indicative of a person's welfare using artificial intelligence techniques). It would have been obvious to a person having ordinary skill in the art before the effective filing date of this invention to modify Shalon with Joshi because there is some teaching, suggestion, or motivation to do so. Shalon teaches that the present technique (of analyzing the data) will then generate a summary of such activities and send it out to the human's loved ones, caretaker, or even an emergency response team depending on the urgency of the situation ([0093]). Therefore, using AI to determine consumption behavior has the obvious benefit of allowing automated communication with loved ones or emergency response teams.

Regarding claim 2, the combination of Shalon and Joshi teaches the method of claim 1, wherein detecting the behavioral data includes obtaining consumption behavior indicia from the one or more sensors ([0107] sensor data), and wherein processing the behavioral data comprises processing the consumption behavior indicia with the artificial intelligence framework to detect the consumption behavior ([0012] generating a signature classifying said ingestion related activity).

Regarding claim 3, the combination of Shalon and Joshi teaches the method of claim 2, wherein obtaining consumption behavior indicia from the one or more sensors includes receiving motion indicia of the consumption behavior with a motion detector ([0382] detection of chewing activity by sensing motion of the skin).

Regarding claim 4, the combination of Shalon and Joshi teaches the method of claim 2, wherein obtaining consumption behavior indicia from the one or more sensors includes receiving motion indicia of the consumption behavior with a motion detector ([0382] detection of chewing activity by sensing motion of the skin).

Regarding claim 5, the combination of Shalon and Joshi teaches the method of claim 1, wherein adjusting the stimulation provided by the sensory prosthesis to adjust the consumption behavior includes: selecting stimulation that enhances a pleasurability of the consumption behavior ([0140] electrically stimulate sensitive points…to cause a pleasant sensation); selecting stimulation that decreases a pleasurability of the consumption behavior ([0140] make eating less enjoyable); or selecting stimulation that otherwise alters the consumption behavior ([0139] trigger an electrical stimulation or some other form of feedback that is un-ignorable).

Regarding claim 6, the combination of Shalon and Joshi teaches the method of claim 1, further comprising: logging the consumption behavior ([0012] computationally logging said signature); and presenting the logged consumption behavior ([0176] messages that are displayed on a continually or periodic basis on a computer or television screen in real time).

Regarding claim 7, the combination of Shalon and Joshi teaches the method of claim 1, wherein detecting the consumption behavior includes detecting the consumption behavior responsive to detecting a specific sensory input ([0382] detection of chewing activity by sensing motion of the skin).

Regarding claim 8, the combination of Shalon and Joshi teaches the method of claim 1, wherein the consumption behavior is at least one of an eating behavior ([0012] ingestion behavior), a drinking behavior, a vaping behavior, or a smoking behavior.
Regarding claim 9, Shalon teaches a system comprising: a sensory prosthesis of a recipient ([0437] hearing aids), the sensory prosthesis comprising a microphone ([0071]) and a movement sensor ([0382] detection of chewing activity by sensing motion of the skin); and a computing device ([0192] processing unit 14, computer) configured to: receive, from the microphone ([0120] microphone component for picking up the user's voice, internal cranial sounds, and/or ambient sounds; [0194] bone conduction microphone is designed to sense the acoustic energy generated within the mouth during eating) and the movement sensor ([0382] detection of chewing activity by sensing motion of the skin), consumption behavior indicia regarding the recipient; determine a consumption behavior of the recipient ([0012] generating a signature classifying said ingestion related activity); and act on the determined consumption behavior to adjust the consumption behavior ([0178] messages that are displayed on a continually or periodic basis on a computer or television screen in real time). Shalon fails to teach artificial intelligence.

Joshi teaches that determining the consumption behavior based on the consumption behavior indicia includes applying an artificial intelligence framework to the consumption behavior indicia and determining the behavior based on an output of the artificial intelligence framework ([0093] learn high level behavior that are indicative of a person's welfare using artificial intelligence techniques). It would have been obvious to a person having ordinary skill in the art before the effective filing date of this invention to modify Shalon with Joshi because there is some teaching, suggestion, or motivation to do so. Shalon teaches that the present technique (of analyzing the data) will then generate a summary of such activities and send it out to the human's loved ones, caretaker, or even an emergency response team depending on the urgency of the situation ([0093]). Therefore, using AI to determine consumption behavior has the obvious benefit of allowing automated communication with loved ones or emergency response teams.

Regarding claim 11, the combination of Shalon and Joshi teaches the system of claim 9. Joshi further teaches that the artificial intelligence framework is a machine learning framework trained on consumption behaviors ([0045] soft sensor for detecting, cooking, and eating habits).

Regarding claim 12, the combination of Shalon and Joshi teaches the system of claim 9, wherein to act on the determined consumption behavior includes to: provide a message to the recipient ([0178] messages that are displayed on a continually or periodic basis on a computer or television screen in real time), the message including an indication of the consumption behavior ([0177] messages containing feedback relating to the amount of food eaten. Such an interruption in the eating behavior may be sufficient to condition people to eat less or otherwise modify their eating behavior); or adjust a stimulation provided by the sensory prosthesis ([0139] trigger electrical stimulation).

Regarding claim 13, the combination of Shalon and Joshi teaches the system of claim 9, wherein to receive the consumption behavior indicia regarding the recipient of the sensory prosthesis includes to: receive auditory indicia from the microphone ([0146] raw audio recordings of ingestion sounds); and receive hand movement indicia or head movement indicia from the movement sensor ([0020] device is capable of sensing ingestion activity related motion or acoustic energy).

Regarding claim 14, the combination of Shalon and Joshi teaches the system of claim 9, wherein the microphone is an implanted microphone ([0187] Sensor unit 12 can be adhered to the surface of the skin, implanted subcutaneously, or implanted deeper in the body on or adjacent a bone or other suitable location).
Regarding claim 15, the combination of Shalon and Joshi teaches the system of claim 9, wherein the computing device is further configured to receive additional consumption behavior indicia regarding the recipient of the sensory prosthesis, including to receive data from a location sensor ([0092] position sensors), a manual input ([0313] store personal information with manual intervention of the user), a scene classifier, a glucose sensor ([0093] sensor can detect glucose content), an alcohol sensor ([0305] may sense smoke or alcohol directly by incorporating a physical sensor of such items), a blood oxygen sensor ([0432] any of the sensors may detect one of…blood oxygen), a near field communication sensor, or a financial transaction monitor.

Regarding claim 16, Shalon teaches a non-transitory computer-readable medium having instructions stored thereon that, when executed by one or more processors, cause the one or more processors to: receive data from a sensory prosthesis associated with a recipient ([0012] accumulating data relating to ingestion behavior); log the consumption behavior by a recipient of a sensory prosthesis based on data obtained by the sensory prosthesis ([0012] computationally logging said signature); display information regarding the consumption behavior ([0176] messages that are displayed on a continually or periodic basis on a computer or television screen in real time); and adjust sensory output of the sensory prosthesis based on the consumption behavior of the recipient of the sensory prosthesis ([0054] real-time biofeedback that is based on eating microstructure events could be utilized to monitor and modify human behavior and in particular eating behavior). Shalon fails to teach artificial intelligence.

Joshi teaches that determining the consumption behavior based on the consumption behavior indicia includes applying an artificial intelligence framework to the consumption behavior indicia and determining the behavior based on an output of the artificial intelligence framework ([0093] learn high level behavior that are indicative of a person's welfare using artificial intelligence techniques). It would have been obvious to a person having ordinary skill in the art before the effective filing date of this invention to modify Shalon with Joshi because there is some teaching, suggestion, or motivation to do so. Shalon teaches that the present technique (of analyzing the data) will then generate a summary of such activities and send it out to the human's loved ones, caretaker, or even an emergency response team depending on the urgency of the situation ([0093]). Therefore, using AI to determine consumption behavior has the obvious benefit of allowing automated communication with loved ones or emergency response teams.

Regarding claim 19, the combination of Shalon and Joshi teaches the non-transitory computer-readable medium of claim 16, wherein, when adjusting the sensory output of the sensory prosthesis, the instructions further cause the one or more processors to adjust the sensory output of the sensory prosthesis responsive to a sensory input to at least one of encourage ([0140] electrically stimulate sensitive points…to cause a pleasant sensation), discourage ([0140] make eating less enjoyable), or adjust the consumption behavior of the recipient of the sensory prosthesis ([0139] trigger an electrical stimulation or some other form of feedback that is un-ignorable).
Regarding claim 20, the combination of Shalon and Joshi teaches the non-transitory computer-readable medium of claim 16, wherein the instructions further cause the one or more processors to provide a message to the recipient ([0178] messages that are displayed on a continually or periodic basis on a computer or television screen in real time), the message including an indication of the consumption behavior ([0177] messages containing feedback relating to the amount of food eaten. Such an interruption in the eating behavior may be sufficient to condition people to eat less or otherwise modify their eating behavior).

Regarding claim 21, the combination of Shalon and Joshi teaches the system of claim 9. Shalon teaches that the computing device is further configured to: log the consumption behavior ([0175] The information contained in such a log can include eating times and duration, number and intensity of bites, chews or swallows, volume of liquids consumed, and if the food type is identified by the system or the user, in which case, the summary can also include, for example, caloric content and a breakdown of percentages of fats, proteins, sugars and carbohydrates); and output for display the logged consumption behavior ([0138] the system can produce a visual display of information for the user).

Regarding claim 22, the combination of Shalon and Joshi teaches the non-transitory computer-readable medium of claim 16. Shalon further teaches that, when receiving data from the sensory prosthesis ([0146] the system can employ any sensor capable of detecting ingestion activity (eating/drinking)), the instructions cause the one or more processors to receive auditory data from an implanted microphone ([0193] sensor unit 12 is configured for detecting and capturing acoustic energy through a bone conduction microphone).

Regarding claim 23, the combination of Shalon and Joshi teaches the non-transitory computer-readable medium of claim 16. Shalon further teaches that, when receiving data from the sensory prosthesis (sensor unit 12), the instructions cause the one or more processors to receive motion data from a motion detector ([0020] said device is capable of sensing ingestion activity related motion).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Dhrasti Snehal Dalal, whose telephone number is (571) 272-0780. The examiner can normally be reached Monday - Thursday, 8:30 am - 6:00 pm; alternate Friday off, 8:30 am - 5:00 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Carl Layno, can be reached at (571) 272-4949. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/D.S.D./
Examiner, Art Unit 3796

/CARL H LAYNO/
Supervisory Patent Examiner, Art Unit 3796

Prosecution Timeline

Feb 08, 2023: Application Filed
Feb 08, 2023: Response after Non-Final Action
May 05, 2025: Non-Final Rejection (§103)
Aug 05, 2025: Response Filed
Sep 25, 2025: Final Rejection (§103)
Dec 23, 2025: Request for Continued Examination
Feb 11, 2026: Response after Non-Final Action
Feb 26, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12350762: SYSTEMS AND METHODS FOR HEIGHT CONTROL IN LASER METAL DEPOSITION (granted Jul 08, 2025; 2y 5m to grant)
Patent 12349847: MOP HEAD AND SELF-WRINGING MOP APPARATUS AND ASSEMBLY AND METHOD OF WRINGING A MOP (granted Jul 08, 2025; 2y 5m to grant)
Patent 12352306: Workpiece Support For A Thermal Processing System (granted Jul 08, 2025; 2y 5m to grant)
Patent 12350227: BUBBLE MASSAGE FLOAT APPARATUS AND METHOD (granted Jul 08, 2025; 2y 5m to grant)
Patent 12343473: METHOD AND APPARATUS FOR TREATING HYPERAROUSAL DISORDER (granted Jul 01, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71% (81% with interview, +9.9%)
Median Time to Grant: 2y 9m
PTA Risk: High

Based on 312 resolved cases by this examiner. Grant probability derived from career allow rate.
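The headline figures above are simple arithmetic over the examiner's career record. A minimal sketch of the apparent derivation (assumptions: grant probability is the raw career allow rate, and the interview figure is the base rate plus the stated lift; the dashboard's actual model may differ):

```python
# Sketch of the projection arithmetic, using the numbers reported above.
granted, resolved = 222, 312     # examiner's career record
interview_lift = 9.9             # stated lift, in percentage points

grant_probability = granted / resolved * 100   # career allow rate
with_interview = grant_probability + interview_lift

print(f"Grant probability: {grant_probability:.0f}%")  # 71%
print(f"With interview:    {with_interview:.0f}%")     # 81%
```

Rounding 222/312 = 71.15% to the nearest point reproduces the 71% shown; adding the 9.9-point lift reproduces the 81% interview figure.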
