Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3 and 6-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Where applicant acts as his or her own lexicographer to specifically define a term of a claim contrary to its ordinary meaning, the written description must clearly redefine the claim term and set forth the uncommon definition so as to put one reasonably skilled in the art on notice that the applicant intended to so redefine that claim term. Process Control Corp. v. HydReclaim Corp., 190 F.3d 1350, 1357, 52 USPQ2d 1029, 1033 (Fed. Cir. 1999). The term “physical representation” in claims 6, 9, 11, and 12 is used in the claims to mean “physical version/form,” while the accepted meaning is “someone or something that represents: such as a: an artistic likeness or image” (Merriam-Webster Dictionary). The term is indefinite because the specification does not clearly redefine it. Specifically, this terminology is confusing because the physical infant cannot be a representation of itself; the infant is already itself.
The Examiner recommends that the Applicant replace the term “representation” with “form”, “version”, or a term with a similar meaning. For the purposes of examination, the Examiner assumes the Applicant intended to use “physical form” or “physical version” in place of “physical representation”. The Examiner notes that the recitations of “virtual representation” are definite so long as the Applicant intended the term “virtual representation” to refer to the infant/tool/object the term represents when the infant/tool/object is depicted within a virtualized environment, such as the simulation.
Claim 3 recites the limitation "the remote computing device" in line 3. There is insufficient antecedent basis for this limitation in the claim.
Claim 11 recites the limitation "the remote computing device" in line 3. There is insufficient antecedent basis for this limitation in the claim.
Dependent claims 7-8, 10, and 13-15 are rejected by virtue of their dependence from the rejected claims above.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claims 1-8 and 12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Rao [US20160295038A1].
Regarding claim 1, Rao discloses:
A system for simulating interactions with an infant, comprising:
a display (Rao, [0010], “the mobile device/intelligent electronic glasses/headset may display content on a variety of displays and surfaces.” and Rao, Fig. 1, Display 101); and
at least one processor,
wherein the at least one processor is programmed to (Rao, Fig. 1 and [0157], “The mobile device/intelligent electronic glasses/headset is enabled with the capability to acquire images by built in camera or external means 109, send 110, receive 111, process with a built in processor 112, store information locally 113, recognize and compare image and other patterns 114, with the composite functions being represented by 108 for providing the needed functionality for wired or wireless communication of images and other information.”):
receive input to add a state (Rao, [0157], “receive 111”. The system is receiving input from the image entity 120 and its various sub-components as seen in Figure 1. Rao discloses adding various states, such as displaying image, audio, and video, in [0158]-[0159].);
receive input setting one or more parameters associated with the state (Rao, [0157], “receive 111”. The system is receiving input for setting one or more parameters associated with the state as disclosed in [0109]: “enable an image to be dynamically constructed and or deconstructed as a virtual digital image entity by software means of the image itself and or in association with other related parameters such as voice, text, data and other related information.”);
cause content to be presented based on the parameters via the display (See the citation directly above, where the parameters are utilized in displaying content. Through the parameters, such as voice, text, data, and other related information, the image entity is displayed.);
save the parameters (Rao, [0157], “store information locally 113”);
receive a selection of the state (Rao, [0031], “a user may select to activate a rear camera so that it is displayed in the left or right lens.” In this example, Rao discloses a user selecting a component to set the state of visualization.); and
in response to receiving the selection of the state, cause a simulated infant in the simulation to be presented based on the one or more parameters (Rao, [0137], “the system may allow for a baby to be monitored by the camera and interacted with a remote person such as a parent, baby sitter, or teacher using a local projector.”).
The Examiner notes that various portions, but not all, of the cited prior art are not explicitly directed to monitoring and interacting with an infant. However, the prior art discloses a system capable of monitoring and interacting with an infant, and a recitation of the intended use of the claimed invention must result in a structural difference between the claimed invention and the prior art in order to patentably distinguish the claimed invention from the prior art. If the prior art structure is capable of performing the intended use, then it meets the claim.
Regarding claim 2, Rao discloses:
The system of claim 1, wherein the at least one processor is further programmed to:
receive, during the simulation, an indication that user input has been received (In [0031] and [0137], as cited above, the indication of user input is the system’s response of displaying the camera feed.);
select a second state based on the user input (Rao, [0137], “The intelligent table may be used for interacting with people in the room.” As a result of a user selecting to monitor a baby, the system selects a second state where the intelligent table may be interacted with.); and
cause the simulated infant to be updated based on the second state (The simulated infant is updated in a manner that allows the user to interact with the infant.).
Regarding claim 3, Rao discloses:
The system of claim 2, wherein the at least one processor is further programmed to:
transmit parameters associated with the second state to the remote computing device (Rao, [0137], “FIG. 16 shows a hand interacting with an intelligent table, where the table has a touch sensitive display.” As also discussed in claim 2, the intelligent table may be interacted with by the user. The touch sensitive display, as shown in FIG. 16, includes various parameters, such as text parameters, which are transmitted to the system during user interaction.).
Regarding claim 4, Rao discloses:
The system of claim 1, wherein the at least one processor is further programmed to:
receive, during the simulation from a remote computing device, an image of the simulated infant being presented by the remote computing device (Rao, [0137], “the system may allow for a baby to be monitored by the camera and interacted with a remote person such as a parent, baby sitter, or teacher using a local projector.”); and
present the image of the simulated infant via the display (In the cited scenario, Rao discloses using “a local projector”. However, Rao also discloses using “[intelligent] electronic glasses” and “mobile devices” for presenting content.).
Regarding claim 5, Rao discloses:
The system of claim 4, wherein the at least one processor is further programmed to:
receive, via a user interface, a selection of a user interface element (Rao, [0135], “FIG. 14. shows a hand of an individual as viewed through the display of an intelligent electronic glasses or headset. The keyboard may be shown in the display and not actually projected onto the hand, thereby enabling a virtual projection onto the hand in which the keyboard as shown is super-imposed onto the image of hand and viewed through a display.”); and
store an annotation to the simulation to be saved in connection with the simulation (Rao, [0044], “a mobile device may be used and may display on the screen images acquired from a separate camera such as those on a pair of glasses where the images are annotated with content”).
Regarding claim 6, Rao discloses:
A system for simulating interactions with an infant, comprising:
a head mounted display (Rao, “[Intelligent] electronic glasses” as cited above and seen in Figs. 12-15, 17-21, 25-27, and 31-32.) comprising:
a display (Rao, Fig. 14, Display 1408); and
at least one processor,
wherein the at least one processor is programmed to:
join a simulation of an infant (Rao, [0137], “the system may allow for a baby to be monitored by the camera and interacted with a remote person such as a parent, baby sitter, or teacher using a local projector.”);
receive content from a server (Rao, [0011], “It is an aspect of the present disclosure to enable image based communication between mobile device/intelligent electronic glasses/headset, distributed image sensors, stationary devices, and servers by wired or wireless communication means.”);
cause the content to be presented anchored at a location corresponding to a physical representation of an infant (Rao, Fig. 24 and [0145], “Food imagery may be processed for image recognition and nutritional value using a local or network server.” Content pulled from a server is overlaid on various objects, where the objects are identified via object recognition.);
receive, from a remote device, one or more parameters associated with the simulated infant (Rao, [0145], “Food imagery may be processed for image recognition and nutritional value using a local or network server. A report on nutritional value of food consumed and recommendations on behavioral improvements may be created and delivered daily.” The cited portion relates to food; however, the system, through object detection, is capable of receiving one or more parameters associated with the simulated infant.); and
cause presentation of the content to be updated based on the one or more parameters (Rao discloses updating the content through the various annotations and notes overlaid on the detected objects.).
Regarding claim 7, Rao discloses:
The system of claim 6, wherein the at least one processor is further programmed to:
receive, from the remote device, one or more updated parameters associated with the simulated infant (See claim 6 for information on this limitation. Regarding the “updated” portion, Rao, [0145], “This may allow for a personal diary of continuous data to be created.” This discloses that the parameters are updated.); and
cause presentation of the content to be updated based on the one or more updated parameters (Rao discloses annotating objects with notes. As the information in the notes is updated (see directly above), the content is updated by the alteration of the notes.).
Regarding claim 8, Rao discloses:
The system of claim 7, wherein the at least one processor is further programmed to:
determine that user input has been received (Rao, [0135], “FIG. 14. shows a hand of an individual as viewed through the display of an intelligent electronic glasses or headset. The keyboard may be shown in the display and not actually projected onto the hand, thereby enabling a virtual projection onto the hand in which the keyboard as shown is super-imposed onto the image of hand and viewed through a display. The interaction with this keyboard may be using a finger or pen. The finger or pen movement may be tracked by a camera or laser housed on the mobile device/intelligent electronic glasses/headset.”);
transmit, to the remote device, an indication that the user input has been received (Rao, [0135], “interaction with this keyboard”); and
receive, subsequent to transmitting the indication, the one or more updated parameters associated with the simulated infant (Rao discloses annotating objects with notes. The user would utilize the keyboard for such user-performed annotations.).
Regarding claim 12, Rao discloses:
The system of claim 6, wherein the at least one processor is further programmed to:
detect a position of an object in proximity to the physical representation of the infant (See the citation of Rao in the rejection of claim 9 below.); and
cause presentation of the content to be updated based on the position of the object (See the citation of Rao in the rejection of claim 9 below.).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claims 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Rao and Sarria [US20230021433A1].
Regarding claim 9, Rao discloses:
The system of claim 6, wherein the at least one processor is further programmed to:
detect a position of an object in proximity to the physical representation of the infant (Rao, [0020], (emphasis added), “the intelligent electronic glasses may record an individual touching of an object and classify the object in a data and the object is connected to various meta-data including location, time of day, temperature, proximity to people and other factors.”).
Rao does not explicitly disclose causing a virtual representation of an object to be displayed in response to detecting the position of an object in proximity to another object.
Sarria, however, discloses:
in response to detecting the position of the object in proximity to the physical representation of the infant, cause a virtual representation of a medical device to be presented in connection with the content (Sarria, [0042], “As further illustrated in the AR scene shown in FIG. 2A, AIE groups 204b-204n are rendered proximate to the hand and arm of the user 100.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to display virtual objects based on proximity by incorporating the software for displaying said virtual objects with gesture recognition, as in the improvement discussed in Sarria, into the system executing the method of Rao. As in Sarria, it is within the capabilities of one of ordinary skill in the art to display virtual objects in an augmented reality system, with the predictable result of increasing immersion for the user as needed in Rao.
Regarding claim 10, Rao/Sarria discloses:
The system of claim 9,
wherein the object is a finger of the user (Sarria, [0042], “As further illustrated in the AR scene shown in FIG. 2A, AIE groups 204b-204n are rendered proximate to the hand and arm of the user 100.”), and
wherein the medical device is a stethoscope (Sarria, [0006], “the AIEs that are selected for generating in the AR scene can be based on a model and the selected AIEs can be based on the interests and preferences of the user.” Although Sarria does not explicitly disclose a stethoscope, Sarria discloses generating the AIE based on a model of interest. The exact model is based on user preference (see the intended use note in the rejection of claim 1 above) and an aesthetic design choice (see MPEP 2144.04, Section I).).
Regarding claim 11, Rao/Sarria discloses:
The system of claim 9, wherein the at least one processor is further programmed to:
transmit, to the remote computing device, a position of the object in proximity to the physical representation of the infant (Sarria, [0092], “the controller device communicates via a wireless or wired connection with the client device to transmit inputs from the controller device to the client device.”).
Claims 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Rao and Devam [US20190206134A1].
Regarding claim 13, Rao discloses detecting the position of an object as cited in the rejections above, but Rao does not explicitly disclose causing a heart rate to be presented.
Devam, however, discloses:
a heart rate to be presented (Devam, [0155], “FIG. 9 shows a sample HUD configuration. The four vital signs being monitored, temperature, oxygen saturation, pulse rate and blood pressure are shown in the top left, top right, bottom left, and bottom right corners respectively.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have displayed the heart rate of the subject being viewed for the user to visualize as in Devam in the system executing the method of Rao with the motivation of introducing the system of Rao to the medical market.
Regarding claim 14, Rao/Devam discloses:
The system of claim 13, wherein the heart rate is presented using a user interface element (Devam, [0155], “FIG. 9 shows a sample HUD configuration. The four vital signs being monitored, temperature, oxygen saturation, pulse rate and blood pressure are shown in the top left, top right, bottom left, and bottom right corners respectively.”).
Regarding claim 15, Rao/Devam discloses:
The system of claim 13, wherein the heart rate is presented using an audio signal (Devam, [0101], “FIG. 3 illustrates a system 1400 for auditory cardiographic analysis. In some embodiments, an apparatus for auditory cardiographic analysis (ACA) device 1040 comprises a sensor 1004, which is pressed against the chest of the subject in a location where the heartbeat can be heard.”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZACHARY JOSEPH POLLOCK whose telephone number is (703) 756-5952. The examiner can normally be reached Monday-Friday, 10:00am-8:00pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, XUAN THAI can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Z.J.P./Examiner, Art Unit 3715
/XUAN M THAI/Supervisory Patent Examiner, Art Unit 3715