DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on a PCT application filed in Korea on 10/19/2023. It is noted, however, that applicant has not filed a certified copy of the PCT application.
Information Disclosure Statement
The information disclosure statement filed 05/02/2025 fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. It has been placed in the application file, but where crossed out, the information referred to therein has not been considered.
The information disclosure statement filed 05/02/2025 fails to comply with 37 CFR 1.98(a)(3)(i) because it does not include a concise explanation of the relevance, as it is presently understood by the individual designated in 37 CFR 1.56(c) most knowledgeable about the content of the information, of each reference listed that is not in the English language. It has been placed in the application file, but where crossed out, the information referred to therein has not been considered.
Drawings
The drawings are objected to because:
Drawing sheet numbering (1/21, 2/21, etc.) fails to comply with the standards set forth in 37 CFR 1.84(t), which recites “drawing sheet numbering must be clear and larger than the numbers used as reference characters to avoid confusion.”
Fewer than all lines are “uniformly thick and well-defined” as required by 37 CFR 1.84(l). See Fig. 1, for example, wherein some lines are thicker than others, without any discernible difference in meaning from line to line.
Views are not “grouped together and arranged on the sheet(s) without wasting space” as required by 37 CFR 1.84(h). The Examiner suggests combining multiple drawings on a single sheet, or enlarging and/or rotating some figures to take up more of the available space on each sheet.
Shading does not conform to the standards set forth in 37 CFR 1.84(m), which recites “shading in views is encouraged if it aids in understanding the invention and if it does not reduce legibility. ... Spaced lines for shading are preferred. These lines must be thin, as few in number as practicable, and they must contrast with the rest of the drawings. ... Solid black shading areas are not permitted, except when used to represent bar graphs or color.” See Fig. 5, for example.
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: 710, 720, 1340.
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference character “1430” has been used to designate two distinct steps in the method shown in Fig. 14.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Claim Objections
Applicant is advised that should claim 17 be found allowable, claim 20 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3, 13, and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 3 recites “the beamforming” in the last line. It is unclear whether this limitation refers to “beamforming” in claim 1 line 9, or to “beamforming” in claim 3 line 3. For the purpose of applying prior art, this limitation is herein construed as referring to “beamforming” in claim 1 line 9.
Similar recitations in claims 13 and 18 are similarly indefinite.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1-2, 5, 10-12, 16-17, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Park et al (US 20160360384 A1).
As recited in independent claim 1, Park et al show a wearable device 101 comprising: a microphone 288; a display module 160; at least one speaker 282; at least one processor 120; and memory 130 storing instructions that, when executed by the at least one processor individually or collectively, cause the wearable device to: perform, by using at least one microphone, beamforming (“processor 502 may detect an ambient sound in a specific direction using reception beamforming technology or a beamforming microphone sensor” [0129]) with respect to a plurality of objects located around (The limitation “plurality of objects located around” is construed as an intended use of the claimed wearable device. It is noted by the Examiner that the prior art device is capable of being surrounded by plural objects, such as a doorbell and a visitor.) the wearable device 101; based on the performing of the beamforming (“DETECT AMBIENT ENVIRONMENT INFORMATION”, see step 603 in Fig. 6), output a notification (“OUTPUT CONTENT AND NOTIFICATION INFORMATION”, see step 609 in Fig. 6) corresponding to the plurality of objects (“when a doorbell sound is detected as the notification event, the notification information may include a voice message saying "The doorbell is ringing" corresponding to the generation of the doorbell event”, [0131]; “the electronic device may display brief information regarding the occurrence of the visitor or the visitor” [0134]) through at least one of the at least one speaker (“the notification information may include a voice message saying "The doorbell is ringing"” [0131]) or the display module (“and display the image or the video” [0134]); identify movement of the wearable device in which the wearable device is rotating to face an object among the plurality of objects (“In operation 1203, the electronic device may determine whether an input confirming the notification information is detected or not. For example, the processor 502 of the electronic device 500 may determine whether … a user's motion to react to the notification information, etc. is detected through the sensor module 504.” [0167]); and based on the identifying of the movement, highlight and output the notification corresponding to the object through at least one of the at least one speaker (“the processor 502 of the electronic device 500 may adjust the mixing ratio to increase the output volume of the notification information in comparison to the volume of the audio data of the content” [0168]) or the display module (“display a sign informing that a notification sound is being mixed on a part of the screen of the electronic device 500 in order to display that the audio data of the content and the notification information of the audio form corresponding to the notification event are being mixed as shown in FIG. 14A” [0174]) while maintaining the beamforming with respect to the plurality of objects (“the processor 502 may detect an ambient sound in a specific direction using reception beamforming technology or a beamforming microphone sensor” [0129]).
As recited in independent claim 1, Park et al are silent regarding a plurality of microphones.
There is no invention in duplicating known parts, when the number of parts is not critical. See In re Harza, 274 F.2d 669, 124 USPQ 378 (CCPA 1960) (Claims at issue were directed to a water-tight masonry structure wherein a water seal of flexible material fills the joints which form between adjacent pours of concrete. The claimed water seal has a "web" which lies in the joint, and a plurality of "ribs" projecting outwardly from each side of the web into one of the adjacent concrete slabs. The prior art disclosed a flexible water stop for preventing passage of water between masses of concrete in the shape of a plus sign (+). Although the reference did not disclose a plurality of ribs, the court held that mere duplication of parts has no patentable significance unless a new and unexpected result is produced.). In this case, the record is devoid of evidence of unexpected results due to the number of microphones.
Moreover, the Examiner finds that plural microphones were predictable before the effective filing date.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date to duplicate the microphone of Park et al. The rationale is as follows: one of ordinary skill in the art would have had reason to improve beamforming by using plural microphones as was known in the art.
As recited in claim 2, Park et al show at least one camera (see [0038], “electronic device … may include at least one of, for example, … a camera”; “inputting means may include … a camera” [0102]), wherein the instructions cause the wearable device to, based on an image captured by the at least one camera (“appearance of the visitor as an image or a video” [0134]) and a sound obtained by the microphone (“a doorbell sound is detected” [0131]), identify whether the plurality of objects is located around the wearable device (“electronic device may receive the appearance of the visitor as an image or a video, and display the image or the video” [0134]).
Regarding claim 2: Park et al are silent regarding plural microphones.
See teachings, findings, and rationale above for independent claim 1.
As recited in claim 5, Park et al show, in a case in which the movement of the wearable device is identified (“the processor 502 of the electronic device 500 may determine whether sensor data such as … a user's motion to react to the notification information, etc. is detected through the sensor module 504” [0167]; “IS INPUT CONFIRMING NOTIFICATION INFORMATION DETECTED”, see 1203 in Fig. 12), increase a volume of the notification output through the at least one speaker (“When the electronic device determines that the user cannot recognize the notification information corresponding to the notification event, the electronic device may adjust the output volume of the notification information in operation 1107. For example, the processor 502 of the electronic device 500 may increase the output volume of the notification information at a predetermined rate. In this case, the processor 502 may increase the output volume of the notification information based on the output volume of the content detected in operation 1103” [0163]; “the electronic device may adjust the volume of the notification information based on similarity between the notification sound of the notification information and the audio data of the content” [0164]) or output a designated sound effect (“the output module 510 may mix audio data of the content and the notification information (for example, a notification sound) corresponding to the notification event, and output the content” [0115]).
As recited in claim 10, Park et al show, in a case in which the plurality of objects (see objects in Fig. 17, for example) are identified, control an external electronic device (a washing machine, for example) operatively connected to the wearable device (see arrows in Fig. 17) such that at least one of a sound (insofar as washing machine motors generate sound while washing) generated from the plurality of objects (washing machine, for example) or a prestored sound corresponding to the plurality of objects (it is noted by the Examiner that these limitations are recited in the alternative, such that the claim limitation is met by the prior art disclosure of one alternative, even in the absence of the other) are output together with a content sound (insofar as water entering, contained by, or draining from a washing machine generates a different sound during washing) being output by the external electronic device (washing machine, for example).
As recited in independent claim 11, Park et al show a method for controlling a wearable device, the method comprising: performing, by using a microphone of the wearable device, beamforming (“processor 502 may detect an ambient sound in a specific direction using reception beamforming technology or a beamforming microphone sensor” [0129]) with respect to a plurality of objects located around (The limitation “plurality of objects located around” is construed as an intended use of the claimed wearable device. It is noted by the Examiner that the prior art device is capable of being surrounded by plural objects, such as a doorbell and a visitor.) the wearable device 101; based on the performing of the beamforming, outputting a notification (“OUTPUT CONTENT AND NOTIFICATION INFORMATION”, see step 609 in Fig. 6) corresponding to the plurality of objects (“when a doorbell sound is detected as the notification event, the notification information may include a voice message saying "The doorbell is ringing" corresponding to the generation of the doorbell event”, [0131]; “the electronic device may display brief information regarding the occurrence of the visitor or the visitor” [0134]) through at least one of: at least one speaker (“the notification information may include a voice message saying "The doorbell is ringing"” [0131]) of the wearable device 101; or a display module (“and display the image or the video” [0134]) of the wearable device 101; identifying movement of the wearable device in which the wearable device is rotating to face an object among the plurality of objects (“In operation 1203, the electronic device may determine whether an input confirming the notification information is detected or not. For example, the processor 502 of the electronic device 500 may determine whether … a user's motion to react to the notification information, etc. is detected through the sensor module 504.” [0167]); and based on the identifying of the movement, highlighting and outputting the notification corresponding to the object through at least one of the at least one speaker (“the processor 502 of the electronic device 500 may adjust the mixing ratio to increase the output volume of the notification information in comparison to the volume of the audio data of the content” [0168]) or the display module (“display a sign informing that a notification sound is being mixed on a part of the screen of the electronic device 500 in order to display that the audio data of the content and the notification information of the audio form corresponding to the notification event are being mixed as shown in FIG. 14A” [0174]) while maintaining the beamforming with respect to the plurality of objects (“the processor 502 may detect an ambient sound in a specific direction using reception beamforming technology or a beamforming microphone sensor” [0129]).
As recited in independent claim 11, Park et al are silent regarding a plurality of microphones.
See teachings, findings, and rationale above for independent claim 1.
Regarding claim 12: See above for claim 2.
As recited in independent claim 16, Park et al show one or more non-transitory computer-readable storage media storing one or more computer programs including computer-readable instructions (“memory 130 may store, for example, commands or data relevant to at least one other component of the electronic device 101. According to an embodiment of the present disclosure, the memory 130 may store software and/or a program 140” [0047]) that, when executed by one or more processors of a wearable device individually or collectively, cause the wearable device to perform operations, the operations comprising: performing, by using a microphone 288 of the wearable device 101, beamforming (“processor 502 may detect an ambient sound in a specific direction using reception beamforming technology or a beamforming microphone sensor” [0129]) with respect to a plurality of objects located around (The limitation “plurality of objects located around” is construed as an intended use of the claimed wearable device. It is noted by the Examiner that the prior art device is capable of being surrounded by plural objects, such as a doorbell and a visitor.) the wearable device 101; based on the performing of the beamforming (“DETECT AMBIENT ENVIRONMENT INFORMATION”, see step 603 in Fig. 6), outputting a notification (“OUTPUT CONTENT AND NOTIFICATION INFORMATION”, see step 609 in Fig. 6) corresponding to the plurality of objects (“when a doorbell sound is detected as the notification event, the notification information may include a voice message saying "The doorbell is ringing" corresponding to the generation of the doorbell event”, [0131]; “the electronic device may display brief information regarding the occurrence of the visitor or the visitor” [0134]) through at least one of: at least one speaker (“the processor 502 of the electronic device 500 may adjust the mixing ratio to increase the output volume of the notification information in comparison to the volume of the audio data of the content” [0168]) of the wearable device 101; or a display module (“display a sign informing that a notification sound is being mixed on a part of the screen of the electronic device 500 in order to display that the audio data of the content and the notification information of the audio form corresponding to the notification event are being mixed as shown in FIG. 14A” [0174]) of the wearable device 101; identifying movement of the wearable device in which the wearable device is rotating to face an object among the plurality of objects (“In operation 1203, the electronic device may determine whether an input confirming the notification information is detected or not. For example, the processor 502 of the electronic device 500 may determine whether … a user's motion to react to the notification information, etc. is detected through the sensor module 504.” [0167]); and based on the identifying of the movement, highlighting and outputting the notification corresponding to the object through at least one of the at least one speaker (“the processor 502 of the electronic device 500 may adjust the mixing ratio to increase the output volume of the notification information in comparison to the volume of the audio data of the content” [0168]) or the display module (“display a sign informing that a notification sound is being mixed on a part of the screen of the electronic device 500 in order to display that the audio data of the content and the notification information of the audio form corresponding to the notification event are being mixed as shown in FIG. 14A” [0174]) while maintaining the beamforming with respect to the plurality of objects (“the processor 502 may detect an ambient sound in a specific direction using reception beamforming technology or a beamforming microphone sensor” [0129]).
As recited in claim 16, Park et al are silent regarding a plurality of microphones.
See teachings, findings, and rationale above for independent claim 1.
Regarding claims 17 and 20: See above for claim 5.
Claim(s) 3-4, 13-14, and 18-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Park et al (US 20160360384 A1) as applied above, and further in view of Kim et al (US 20130297319 A1).
Park et al show a wearable device, method, and medium as described above.
As recited in claim 3, Park et al are silent regarding whether the instructions cause the wearable device to, in a case in which the movement of the wearable device is identified, perform beamforming by using at least one other microphone different from the at least one microphone being used for performing the beamforming (to the extent understood).
Regarding claim 3: Kim et al teach that “the mobile device activates only a first microphone sensor used for a call and a second microphone sensor for the voice recognition system. If the second microphone sensor recognizes voice, which means that the voice recognition system is activated, the mobile device deactivates the other microphone sensors including the first microphone sensor. Accordingly, when the user uses the voice recognition system, for example, to search for a phone number during a call, the other sensors except the second microphone sensor are deactivated. As a result, the other party does not hear a voice recognition command and thus is less disturbed” [0108].
Moreover, the Examiner finds that using at least one other microphone different from at least one microphone was predictable before the effective filing date.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date to use a different microphone from the microphone Park et al uses for beamforming. The rationale is as follows: one of ordinary skill in the art would have had reason to prevent a called party from overhearing disturbing commands as taught by Kim et al [0108].
As recited in claim 4, Park et al are silent regarding controlling one or more remaining microphones other than the at least one microphone among the plurality of microphones such that the one or more remaining microphones are: operated at low power, minimized performance, or both; or deactivated.
Regarding claim 4: Kim et al teach that “when the mobile device uses a voice recognition system, the mobile device may activate only a specific microphone sensor used for the voice recognition system, while deactivating the other microphone sensors. The mobile device may also change microphones to be activated and deactivated for the voice recognition system according to user selection” [0112].
Moreover, the Examiner finds that deactivating microphones was predictable before the effective filing date.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date to deactivate microphones when not in use as suggested by Kim et al. The rationale is as follows: one of ordinary skill in the art would have had reason to save power so as to extend battery life as was known in the art.
Regarding claim 13: See teachings, findings, and rationale above for claim 3.
Regarding claim 14: See teachings, findings, and rationale above for claim 4.
Regarding claim 18: See teachings, findings, and rationale above for claim 3.
Regarding claim 19: See teachings, findings, and rationale above for claim 4.
Claim(s) 6, 9, and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Park et al (US 20160360384 A1) as applied above, and further in view of Ko et al (US 20230230334 A1).
Park et al show a wearable device, method, and medium as described above.
As recited in claim 6, Park et al show that the movement of the wearable device is identified (“In operation 1203, the electronic device may determine whether an input confirming the notification information is detected or not. For example, the processor 502 of the electronic device 500 may determine whether … a user's motion to react to the notification information, etc. is detected through the sensor module 504.” [0167]).
As recited in claim 6, Park et al are silent regarding whether to control the display module such that a size of the notification is changed based on a level of risk is output through the display module.
As recited in claim 6, Ko et al show controlling the display module such that a size of the notification is changed (compare “FIRST SIZE” in 360 to “SECOND SIZE” in 370, see Fig. 3) based on a level of risk (see “IMPORTANCE ≥ SECOND IMPORTANCE?” at 350 in Fig. 3; see also [0188] “the external object may be an object dangerous to a user (e.g., a motorcycle or a car approaching a user). In this case, the user needs to recognize and avoid the external object, the electronic device 200 may inform the user of the existence of the external object through the display 231 and provide a warning message. In an embodiment, the processor 220 may additionally output a notification through at least one of the speaker 232 and the haptic module 233 while displaying an indicator on the display 231”) is output through a display module 231.
Moreover, the Examiner finds that basing a size on a level of risk was predictable before the effective filing date.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date to base a size of the notification of Park et al on a level of risk as taught by Ko et al. The rationale is as follows: one of ordinary skill in the art would have had reason to empower the user to recognize and avoid a dangerous object as taught by Ko et al (“the external object may be an object dangerous to a user (e.g., a motorcycle or a car approaching a user). In this case, the user needs to recognize and avoid the external object, the electronic device 200 may inform the user of the existence of the external object through the display 231 and provide a warning message. In an embodiment, the processor 220 may additionally output a notification through at least one of the speaker 232 and the haptic module 233 while displaying an indicator on the display 231” [0188]; “displaying an indicator in a second area of the display to be a first size in response to the importance which is equal to or greater than the first importance and less than a second importance, and in response to the importance which is equal to or greater than the second importance, displaying the indicator in a second size larger than the first size on the display” [0203]).
As recited in claim 9, Park et al are silent regarding determining whether a sound generated from the object is a sound belonging to a configured high-risk group; and in a case in which the wearable device determines that the sound generated from the object is a sound belonging to the configured high-risk group, control the at least one speaker or the display module such that the notification output through the at least one speaker is output at a volume greater than or equal to a designated magnitude or the notification output through the display module is output at a size greater than or equal to a designated size.
As recited in claim 9, Ko et al show determining whether a sound generated from the object is a sound belonging to a configured high-risk group (“processor 220 may obtain location information and speed information of the motorcycle through the sensor 210, obtain speed information of the electronic device 200, and obtain the relative direction information on the motorcycle located with respect to the electronic device 200” [0126], wherein “sensor 210 may include … an audio sensor 212” [0049]); and in a case in which the wearable device determines that the sound generated from the object is a sound belonging to the configured high-risk group (motorcycle, for example), control the at least one speaker or the display module such that the notification output through the at least one speaker is output at a volume greater than or equal to a designated magnitude (it is noted by the Examiner that this limitation is recited in the alternative, such that the presence in the prior art of a second alternative satisfies the limitation, even in the absence of the first alternative) or the notification output through the display module is output at a size (second size) greater than or equal to a designated size (“the indicator is displayed in a second size larger than the first size” [0078]).
Moreover, the Examiner finds that increasing a displayed notification size in response to a high-risk object was predictable before the effective filing date.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date to increase a size of Park et al’s displayed notification in response to a high-risk object as taught by Ko et al. The rationale is as follows: one of ordinary skill in the art would have had reason to more urgently alert the user as taught by Ko et al (“The user may recognize the external object through the indicator displayed in the second size on the display 231 and may avoid the external object or stop using the virtual reality device” [0147]).
As recited in claim 15, Park et al show a case in which the movement of the wearable device is identified (“In operation 1203, the electronic device may determine whether an input confirming the notification information is detected or not. For example, the processor 502 of the electronic device 500 may determine whether … a user's motion to react to the notification information, etc. is detected through the sensor module 504.” [0167]).
As recited in claim 15, Park et al are silent regarding outputting the notification with a volume changed according to a level of risk through the at least one speaker.
Regarding claim 15, Ko et al teach that “when the processor 220 determines that the importance of the external object is equal to or greater than a second importance, the processor 220 may output a warning sound through the speaker 232 while displaying an indicator on the display 231” [0058].
Official notice is taken of the fact that warning sounds with volume changed according to a level of risk were known prior to the effective filing date.
Moreover, the Examiner finds that warning sound volume changed according to a level of risk was predictable before the effective filing date.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date to change a warning sound volume according to Ko et al’s level of risk through the speaker of Park et al. The rationale is as follows: one of ordinary skill in the art would have had reason to more urgently notify a user of greater risk, as was known in the art.
Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Park et al (US 20160360384 A1) as applied above, and further in view of Peterson et al (US Pat. No. 10768699 B2).
Park et al show a wearable device, method, and medium as described above.
As recited in claim 7, Park et al show a case in which the movement of the wearable device is identified (“In operation 1203, the electronic device may determine whether an input confirming the notification information is detected or not. For example, the processor 502 of the electronic device 500 may determine whether … a user's motion to react to the notification information, etc. is detected through the sensor module 504.” [0167]).
As recited in claim 7, Park et al are silent regarding controlling the display module such that virtual objects corresponding to shapes of the plurality of objects are output through the display module.
As recited in claim 7, Peterson et al show controlling a display module such that a virtual object 410 corresponding to a shape of an object (motorcycle) is output through a display module (“responsive to a non-zero threshold amount of time lapsing during which the user does not look in the direction of the object, the text 408 and image 410 may also be presented. Additionally, or alternatively, if the processor of the device might take a moment to identify the identity of the object (a “motorcycle”)”, see col. 8, lines 1-6).
Moreover, the Examiner finds that virtual objects corresponding to the shapes of objects were predictable before the effective filing date.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date to display a virtual object corresponding to a shape of each of the objects of Park et al as taught by Peterson et al. The rationale is as follows: one of ordinary skill in the art would have had reason to display a virtual object in order to help a user locate a real object as taught by Peterson et al (“An icon or image 410 of a motorcycle may also be presented to further convey the subject of the person's exclamation, it being understood that the image 410 is not the subject itself of the person's exclamation but rather a representation of it to help the user locate the actual subject”, see col. 7, lines 59-64).
As recited in claim 8, Park et al show a case in which the movement of the wearable device is identified (“In operation 1203, the electronic device may determine whether an input confirming the notification information is detected or not. For example, the processor 502 of the electronic device 500 may determine whether … a user's motion to react to the notification information, etc. is detected through the sensor module 504.” [0167]).
As recited in claim 8, Park et al are silent regarding whether to control at least one camera comprised in the wearable device such that the wearable device is operated in a video see through (VST) mode which directly shows an external environment.
As recited in claim 8, Peterson et al show at least one camera (“system may also include one or more cameras 193 that may gather one or more images”, see col. 5, lines 23-24), and further show a video see through (VST) mode which directly shows an external environment (“headsets may also be used to present content as disclosed herein, such as a virtual reality (VR) headset that may present a camera feed of the user's real-world environment on its display so that content as described herein can be overlaid on the camera feed”, see col. 6, lines 4-8).
Moreover, the Examiner finds that a camera and a VST mode were predictable before the effective filing date.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date to include cameras and a VST mode in the device of Park et al as taught by Peterson et al. The rationale is as follows: one of ordinary skill in the art would have had reason to empower the user to see both real-world and overlaid virtual content in an augmented reality system as taught by Peterson et al (“headsets may also be used to present content as disclosed herein, such as a virtual reality (VR) headset that may present a camera feed of the user's real-world environment on its display so that content as described herein can be overlaid on the camera feed”, see col. 6, lines 4-8).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Julie Anne Watko whose telephone number is (571)272-7597. The examiner can normally be reached Monday-Tuesday 9AM-5PM, Wednesday 10:30AM-5PM, Thursday-Friday 9AM-5PM, and occasional Saturdays.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ke Xiao, can be reached at 571-272-7776. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
JULIE ANNE WATKO
Primary Examiner
Art Unit 2627
/Julie Anne Watko/Primary Examiner, Art Unit 2627
12/21/2025