Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 29, 2025 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The present rejection(s) reference specific passages from the cited prior art. However, Applicant is advised that the rejections are based on the entirety of each cited prior art reference. That is, each cited prior art reference “must be considered in its entirety” (see MPEP 2141.02(VI)). Therefore, Applicant is advised to review all portions of the cited prior art if traversing a rejection based on the cited prior art.
Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over BEBIRD® R3 Ear Wax Removal Cleaner, 0.15 inch 1080P HD Ear Camera Lens with 6 LED Lights Intelligent Otoscope for iPhone, Android Phone(White) Amazon, 2021 (retrieved on 12/02/2024) <URL: https://www.amazon.com/BEBIRD®-Removal-0.15inch-Intelligent-Otoscope/dp/B097GN36GC> (Year: 2021) – “Bebird”, in view of Bebird R3 Smart Visual Ear Cleaner Instruction Manual, https://manuals.plus/ae/1005007326707919, Year: 2021 – “Bebird R3 Instruction manual”, Ahmed et al. (US PGPUB 2021/0142492 – “Ahmed”), Hara et al. (US Patent 7,057,645 – “Hara”), and Agusanto et al. (US PGPUB 2007/0236514 – “Agusanto”).
Regarding Claim 1, Bebird discloses:
An image processing method for an ear cleaning arrangement is executed on one or more processors (Examiner-amended Bebird FIG. 1, smart phone having one or more processors as shown below) for processing image information collected by an ear cleaning arrangement (Bebird FIG. 1, ear cleaning arrangement),
[Examiner-annotated figure: media_image1.png, 756 × 768 pixels, greyscale]
wherein the ear cleaning arrangement comprises
a cleaning assembly (Bebird FIG. 1, cleaning assembly),
an ear spoon assembly (Bebird FIG. 1, ear spoon assembly),
a fixing assembly (Bebird FIG. 1, fixing assembly),
a camera element (Bebird FIG. 1, camera element), and
a light source assembly (Bebird FIG. 1, light source assembly),
the camera element and the light source assembly are mounted on the fixing assembly (Bebird FIG. 1), and
the cleaning assembly is disposed at one end of the fixing assembly (Bebird FIG. 1) so that
at least part of the cleaning assembly is within the image capture range of the camera element (Bebird FIG. 1).
Bebird R3 Instruction manual explicitly teaches before inserting the ear spoon assembly into an ear canal of a user to move the camera element therewithin, selecting a frame from the video stream collected by the camera element to be recorded as a recorded image and recognizing a target captured in the selected frame to be processed by analyzing the recorded image (Bebird R3 Instruction manual, 5.1 Ear Canal Examination & Cleaning, “2. Gently insert the device into the ear canal while observing the live feed on your smartphone”. Examiner interprets this passage as aiming the ear spoon assembly through the ear canal opening and into the ear canal itself.)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Bebird R3 Instruction manual’s instruction for entering the ear canal with the method disclosed by Bebird. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of an ear cleaning method that includes maneuvering to the target cleaning area.
Bebird R3 Instruction manual further teaches:
P5. analyzing the selected image to determine the corresponding image of the ear spoon assembly and the target pointed by the image of the ear spoon assembly; and
P6. generating a movement direction to the target on an image display, wherein the ear cleaning arrangement is guided to move along the movement direction to reach the target (Bebird R3 Instruction manual, 7.1 Ear Canal Examination & Cleaning, “3. Carefully maneuver the ear tip to remove visible earwax.”).
Bebird and Bebird R3 Instruction manual do not explicitly teach:
an attitude sensor, and
the method comprises the steps of:
P1. after inserting the ear spoon assembly into an ear canal of a user to move the camera element therewithin, selecting a frame from the video stream collected by the camera element to be recorded as a recorded image and recognizing a target captured in the selected frame to be processed by analyzing the recorded image;
P2. via the attitude sensor, acquiring attitude information corresponding to the time point of the recorded image to obtain recorded attitude information describing an orientation of the ear spoon assembly of the ear cleaning arrangement;
P3. during a process of the ear cleaning arrangement moving towards the target, determining whether a moving direction of the ear spoon assembly to the target to be processed has changed, wherein if the ear spoon assembly has not changed its moving direction to the target, executing step P4, and if the ear spoon assembly has changed its moving direction, executing step P1; and
P4. via the attitude sensor, recognizing the attitude of the ear spoon assembly of the ear cleaning arrangement and determining whether the ear cleaning arrangement is moving, and if moving occurs, adopting a corresponding image display method which comprises the steps of:
P41. acquiring a segment of the video stream and selecting a frame in the segment as a current selected image;
P42. via the attitude sensor, acquiring current attitude information of the current selected image;
P43. comparing the recorded attitude information with the current attitude information to determine whether the ear cleaning arrangement has changed its moving direction, and if not, executing step P44;
P44. acquiring another current video stream for comparison with the current video stream to obtain video stream variation characteristics; and
P45. updating the selected image at a preset time after the selected image is captured based on the video stream variation characteristics.
Ahmed teaches:
an attitude sensor (Ahmed FIG. 1, Inertial Measurement Unit (IMU) 71), and
the method comprises the steps of:
P1. after inserting the ear spoon assembly into an ear canal of a user to move the camera element therewithin, selecting a frame from the video stream collected by the camera element (Ahmed FIG. 1, camera 60) to be recorded as a recorded image (Ahmed FIG. 3, step block S304; Ahmed paragraph [0075], “The first step is to acquire an earlier image…of an environment from an image stream captured by a camera (S302)”; Ahmed paragraph [0075], “the camera 60 acquires an image stream, including an earlier image and a later image…the later image is a current image in the image stream, and the earlier image is acquired from a memory (e.g. ROM 121, RAM 122)”) and recognizing a target (Ahmed FIG. 4A, object feature 408) captured in the selected frame to be processed by analyzing the recorded image (Ahmed paragraph [0077], “distinguish between object features 408, 408′”);
P2. via the attitude sensor, acquiring attitude information corresponding to the time point of the recorded image to obtain recorded attitude information describing an orientation of the ear spoon assembly of the ear cleaning arrangement (Ahmed FIG. 1, Inertial Measurement Unit (IMU) 71; Ahmed FIG. 2, processing section 167; Ahmed paragraph [0055], “processing section 167 acquires a captured image from the camera 60 in association with time…The processing section 167 calculates, using the calculated spatial relation and detection values of acceleration and the like detected by the IMU 71, a rotation matrix for converting a coordinate system fixed to the camera 60 to a coordinate system fixed to the IMU 71”);
P3. during a process of the ear cleaning arrangement moving towards the target, determining whether a moving direction of the ear spoon assembly to the target to be processed has changed (Ahmed FIG. 4, earlier image 400 and later image 400’), wherein if the ear spoon assembly has changed its moving direction, executing step P1 (Ahmed paragraph [0086], “if object starts moving in later image 400′ at time ‘t’ (later image 400′), pose as well as pose of the camera with respect to the world should be determined.” See also Ahmed paragraph [0087]).
Ahmed further teaches wherein the step P4 further comprises the following steps:
P41. acquiring a segment of the current video stream and selecting a frame as the current selected image (Ahmed FIG. 3, step block S304; Ahmed paragraph [0075], “The first step is to acquire an earlier image…of an environment from an image stream captured by a camera (S302)”);
P42. via the attitude sensor, acquiring current attitude information of the current selected image (Ahmed paragraph [0055], “processing section 167 acquires a captured image from the camera 60 in association with time…The processing section 167 calculates, using the calculated spatial relation and detection values of acceleration and the like detected by the IMU 71, a rotation matrix for converting a coordinate system fixed to the camera 60 to a coordinate system fixed to the IMU 71”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Ahmed’s image processing method with the ear cleaning arrangement taught by Bebird in view of Bebird R3 Instruction manual. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of an ear cleaning arrangement that is able to determine its position based on similarities/differences in frames from a video stream for a target.
Bebird in view of Bebird R3 Instruction manual and Ahmed does not explicitly teach if the ear spoon assembly has not changed its moving direction, via the attitude sensor, recognizing the attitude of the ear spoon assembly of the ear cleaning arrangement and determining whether the ear cleaning arrangement is moving, and if moving occurs, adopting a corresponding image display method (executing step P4).
Hara teaches if the ear spoon assembly has not changed its moving direction, via the attitude sensor, recognizing the attitude of the ear spoon assembly of the ear cleaning arrangement and determining whether the ear cleaning arrangement is moving (Hara FIG. 2, camera shake detector 312; Hara FIG. 5A and Hara FIG. 5B, showing shaking motion of digital camera 100 relative to object “O”), and if moving occurs, adopting a corresponding image display method (Hara col. 4, lines 62-66, “image data processor 308 converts the final image data from the image data accumulator 307 to NTSC signal or PAL signal for displaying a monitor image on the monitor display 130 and outputs the converted signal to the monitor display 130”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Hara’s camera shake detector and method of detecting camera shake with the image processing method taught by Bebird in view of Bebird R3 Instruction manual and Ahmed. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of an image processing method that corrects blurring in an image as caused by camera shake (see Hara FIG. 2, image data corrector 306; Hara col. 5, lines 1-18).
Bebird in view of Bebird R3 Instruction manual, Ahmed, and Hara does not explicitly teach P43. comparing the recorded attitude information with the current attitude information to determine whether the ear cleaning arrangement has changed its moving direction, and if not, executing step P44; P44. acquiring another current video stream for comparison with the current video stream to obtain video stream variation characteristics; and P45. updating the selected image at a preset time after the selected image is captured based on the video stream variation characteristics.
Agusanto teaches P43. comparing the recorded attitude information with the current attitude information to determine whether the ear cleaning arrangement has changed its moving direction, and if not, executing step P44; P44. acquiring another current video stream for comparison with the current video stream to obtain video stream variation characteristics; and P45. updating the selected image at a preset time after the selected image is captured based on the video stream variation characteristics (Agusanto FIG. 1, display 125 connected to video camera 103 in probe 101; Agusanto FIG. 19, object 801 and camera 831; Agusanto paragraph [0126], “FIG. 19 illustrates an arrangement in which a single video camera (831) can be moved within the probe (803) to obtain images of different viewpoints for stereo display.”; Agusanto paragraph [0128], “When the probe is stationary relative to the target…the video camera can be moved by the guiding structure to take real world images from different viewpoints.”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Agusanto’s stereoscope imaging method with the method taught by Bebird in view of Bebird R3 Instruction manual, Ahmed, and Hara. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of a method that generates multiple viewpoints of a target, in order to more fully identify the target and its location.
Regarding Claim 2, Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto teaches the features of Claim 1, as described above.
Ahmed further teaches:
wherein the step P1 further comprises the following steps:
P11. acquiring the video stream collected by the camera element; P12. selecting the frame from the acquired video stream as a selected image (Ahmed FIG. 3, step block S304; Ahmed paragraph [0075], “The first step is to acquire an earlier image…of an environment from an image stream captured by a camera (S302)”).
Bebird further discloses:
P13. defining the target pointed by the ear spoon assembly in the selected image as the target, wherein if the target is earwax, executing step P14, and if the target is not earwax, executing step P11; and
P14. confirming the target and defining the selected image as the recorded image (Bebird page 1, “lights the ear inspection area and capture…images or…videos, which is easier to help you…remove the earwax safely. With ear tips, which is make of stainless steel and silicone, with suitable angles of tilt, it can clean earwax effectively.” Examiner interprets this passage as teaching that the otoscope disclosed by Bebird is used to examine the ear canal at all areas in order to identify and remove earwax therein.).
Bebird R3 Instruction manual further teaches wherein the movement direction is generated to guide the ear cleaning arrangement within the ear canal to reach the earwax in the ear canal (Bebird R3 Instruction manual 7.1 Ear Canal Examination and Cleaning, “3. Carefully maneuver the ear tip to remove visible earwax”); and
wherein the step P4 further comprises the step of: P46. displaying the preset image on the image display (Bebird R3 Instruction manual 3. Package Contents, showing image of ear canal displayed on the user’s smart phone; Examiner notes that the present patent application defines “preset image” as an image that is generated for “a preset time based on the video stream variation characteristics” (paragraph [0073] of the present specification), without defining when the preset time is reached or what the video stream variation characteristics are. As such, Examiner interprets the preset image as an arbitrary image captured by the ear cleaning arrangement.).
Regarding Claim 3, Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto teaches the features of Claim 2, as described above.
Bebird R3 Instruction manual further teaches wherein the step P1 further comprises the following steps: P11. acquiring the video stream collected by the camera element; P12. selecting the frame from the acquired video stream as a selected image; P13. defining the target pointed by the ear spoon assembly in the selected image as the target, wherein if the target is an entrance of the ear canal, executing step P14, and if the target is not the entrance of the ear canal, executing step P11; and P14. confirming the target and defining the selected image as the recorded image, wherein the movement direction is generated to guide the ear cleaning arrangement to reach the entrance of the ear canal (Bebird R3 Instruction manual 7.1 Ear Canal Examination & Cleaning, “2. Gently insert the device into the ear canal while observing the live feed on your smartphone. 3. Carefully maneuver the ear tip to remove visible earwax. Avoid pushing wax deeper or touching the eardrum.” Examiner interprets these instructions as, based on the captured/selected/displayed images, guiding/maneuvering the device until it is proximate to the target/earwax, regardless of where the target/earwax is in the ear.).
Agusanto further teaches wherein the step P4 further comprises a step of P46 displaying the preset image on the image display (Agusanto FIG. 2, video image 201 of object 205).
Regarding Claim 4, Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto teaches the features of Claim 2, as described above.
Ahmed further teaches wherein the step P4 further comprises the following steps:
P4A. acquiring a segment of the current video stream and selecting the frame as the current selected image, recording the current selected image as a currently recorded image (Ahmed FIG. 3, block S302, “acquire an earlier image and a later image of an environment from an image stream captured by a camera”);
recording the attitude information of the current selected image as recorded attitude information; P4B. acquiring the current attitude information of the current selected image through the attitude sensor and recording the current attitude information; and P4C. comparing the recorded attitude information with the current attitude information to determine whether the ear cleaning arrangement has changed the moving direction of the ear cleaning arrangement (Ahmed FIG. 1, Inertial Measurement Unit (IMU) 71; Ahmed FIG. 2, processing section 167; Ahmed paragraph [0055], “processing section 167 acquires a captured image from the camera 60 in association with time…The processing section 167 calculates, using the calculated spatial relation and detection values of acceleration and the like detected by the IMU 71, a rotation matrix for converting a coordinate system fixed to the camera 60 to a coordinate system fixed to the IMU 71”).
Agusanto further teaches if the image sensor has not moved, displaying the recorded image (Agusanto paragraph [0089], “information indicating the real time location relation between the object (111) and the probe (101) and the real time viewpoint for the generation of the real time display of the image for guiding the navigation of the probe is recorded so that, after the procedure, the navigation of the probe may be reviewed from the same sequence of viewpoints”).
Regarding Claim 5, Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto teaches the features of Claim 4, as described above.
Ahmed further teaches:
wherein the step P4 further comprises the following steps:
P4a. acquiring a segment of the current video stream and selecting a frame as the current selected image (Ahmed FIG. 3, step block S304; Ahmed paragraph [0075], “The first step is to acquire an earlier image…of an environment from an image stream captured by a camera (S302)”);
P4b. analyzing the attitude information corresponding to the time point of the current selected image to obtain rotation variation data and orientation variation data (Ahmed FIG. 1, Inertial Measurement Unit (IMU) 71; Ahmed FIG. 2, processing section 167; Ahmed paragraph [0055], “processing section 167 acquires a captured image from the camera 60 in association with time…The processing section 167 calculates, using the calculated spatial relation and detection values of acceleration and the like detected by the IMU 71, a rotation matrix for converting a coordinate system fixed to the camera 60 to a coordinate system fixed to the IMU 71”).
Agusanto further teaches:
P4c. determining whether the selected image corresponds to a first operation mode or a second operation mode, wherein if the first operation mode, executing step P4d, and if the second operation mode, executing step P4e, wherein the first operation mode is defined as moving from a horizontal direction to a vertical direction, and the second operation mode is defined as moving from a vertical direction to a horizontal direction (Agusanto FIG. 1 shows camera 103 operating in three axes (X, Y, Z); Agusanto paragraph [0070], “the position tracking system (127) can compute the position and orientation of the probe (101) in the coordinate system (135) of the position tracking system (127)”; and thus the camera/probe operate in both vertical and horizontal directions);
P4d. defining the angle between a preset axis of the attitude sensor and the horizontal direction as β, and determining whether |β-90| is less than or equal to a preset angle θ, wherein if |β-90| is less than or equal to the preset angle θ, executing step P4e, and if |β-90| is more than the preset angle θ, executing step P4f (Examiner interprets preset angle θ as an arbitrary angle that is not limited/defined by the specification, such that no particular limitation is imposed by defining the angle between a preset axis of the attitude sensor and the horizontal direction as β and determining the value of |β-90|); P4e. adjusting the video stream collected by the camera element based on the orientation variation data and displaying on the image display; and P4f. adjusting the video stream collected by the camera element based on the rotation variation data and displaying on the image display (Agusanto FIG. 4, showing probe 203 and object 309 displayed as an image 401; Examiner interprets steps P4e and P4f as simply displaying what the video camera 103 in Agusanto FIG. 1 images).
Regarding Claim 6, Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto teaches the features of Claim 1, as described above.
Bebird further discloses wherein the fixing assembly comprises a fixing rod (Bebird FIG. 1, fixing rod securing cleaning assembly/ear spoon assembly) and a handle (Bebird FIG. 1, handle), wherein the fixing rod is connected to the handle, wherein the cleaning assembly comprises an ear spoon assembly which is arranged to be connected to the fixing rod for cleaning earwax.
Regarding Claim 7, Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto teaches the features of Claim 6, as described above.
Bebird further discloses wherein the fixing rod comprises a rod body (Bebird FIG. 1, fixing rod body) and a light guide barrel (Bebird FIG. 1, light guide barrel) extended from the rod body, wherein the camera and the light source are assembled inside the rod body and the light guide barrel is positioned in front of the camera and the light source.
Regarding Claim 8, Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto teaches the features of Claim 7, as described above.
Bebird further discloses wherein the ear spoon assembly is detachably connected to the light guide barrel of the fixing rod (Bebird FIG. 1, showing the light guide barrel detached from the cleaning/ear spoon assembly).
Regarding Claim 9, Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto teaches the features of Claim 8, as described above.
Bebird further discloses wherein the ear spoon assembly comprises a rigid inner spoon and a flexible outer spoon, wherein the flexible outer spoon is adapted to be fitted to an outer side of the rigid inner spoon, and the rigid inner spoon comprises an annular installation part and an inner spoon body extended from the installation part, and the installation part of the rigid inner spoon is detachably connected to an outer surface of the light guide barrel (Bebird “New Ear Scoop…ear tips, which is made of stainless steel and silicone, with suitable angles of tilt”).
Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over BEBIRD® R3 Ear Wax Removal Cleaner, 0.15 inch 1080P HD Ear Camera Lens with 6 LED Lights Intelligent Otoscope for iPhone, Android Phone(White) Amazon, 2021 (retrieved on 12/02/2024) <URL: https://www.amazon.com/BEBIRD®-Removal-0.15inch-Intelligent-Otoscope/dp/B097GN36GC> (Year: 2021) – “Bebird”, in view of Bebird R3 Smart Visual Ear Cleaner Instruction Manual, https://manuals.plus/ae/1005007326707919, Year: 2021 – “Bebird R3 Instruction manual”, Ahmed et al. (US PGPUB 2021/0142492 – “Ahmed”), Hara et al. (US Patent 7,057,645 – “Hara”), Agusanto et al. (US PGPUB 2007/0236514 – “Agusanto”), and Berbee et al. (US PGPUB 2020/0268241 – “Berbee”).
Regarding Claim 10, Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto teaches the features of Claim 9, as described above.
Bebird further discloses wherein the camera element comprises a lens end surface (Bebird page 2, showing details of Bebird ear tips and orientation, including wherein the camera element comprises a lens end surface).
Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto does not explicitly teach wherein a tip end of the flexible outer spoon is extended to and aligned with a center of the lens end surface of the camera.
Berbee teaches wherein a tip end of the flexible outer spoon is extended to and aligned with a center of the lens end surface of the camera (Berbee FIG. 1, showing scoop 31 of distal speculum tip 11 aligned with camera 18 in otoscope 16).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to utilize Berbee’s scoop/camera orientation with the method taught by Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of an otoscope having a scoop that is visible in the central area of the camera’s view, in order to align the scoop with a target of interest for removing wax therefrom.
Regarding Claim 11, Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, Agusanto, and Berbee teaches the features of Claim 10, as described above.
Bebird further discloses wherein the light source comprises a plurality of light emitting elements arranged around the camera (Bebird FIG. 1), wherein the plurality of light emitting elements comprise two middle light emitting elements, wherein center positions of the two middle light emitting elements, the center of the lens end surface of the camera, and the center position of the tip end of the flexible outer spoon are located in a same plane (Bebird FIG. 1, a dashed line depicting a plane on which the two middle light emitting elements, the center of the lens end surface of the camera, and the center position of the tip end of the flexible outer spoon are located).
Claims 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over BEBIRD® R3 Ear Wax Removal Cleaner, 0.15 inch 1080P HD Ear Camera Lens with 6 LED Lights Intelligent Otoscope for iPhone, Android Phone(White) Amazon, 2021 (retrieved on 12/02/2024) <URL: https://www.amazon.com/BEBIRD®-Removal-0.15inch-Intelligent-Otoscope/dp/B097GN36GC> (Year: 2021) – “Bebird”, in view of Bebird R3 Smart Visual Ear Cleaner Instruction Manual, https://manuals.plus/ae/1005007326707919, Year: 2021 – “Bebird R3 Instruction manual”, Ahmed et al. (US PGPUB 2021/0142492 – “Ahmed”), Hara et al. (US Patent 7,057,645 – “Hara”), Agusanto et al. (US PGPUB 2007/0236514 – “Agusanto”), Bedingham et al. (US PGPUB 2013/0211265 – “Bedingham”), and Carls et al. (US PGPUB 2010/0069919 – “Carls”).
Regarding Claim 12, Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto teaches the features of Claim 1, as described above.
Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto does not explicitly teach wherein the attitude sensor is built-in with the ear cleaning arrangement not only to determine the attitude of the ear spoon assembly but also to guide the ear cleaning arrangement to move along the movement direction to reach the target.
Bedingham teaches wherein the attitude sensor is built-in with the ear cleaning arrangement to determine the attitude of the ear spoon assembly (Bedingham FIG. 3, control unit 350 of multifunctional medical device 300; see also multifunctional medical device 100 in Bedingham FIG. 1; Bedingham paragraph [0040], “control unit 350 may include at least one of an accelerometer, a gyroscope”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Bedingham’s attitude sensor with the ear cleaning arrangement taught by Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of an ear cleaning arrangement that has position sensors within the insertion portion, thus enabling reorientation of displayed images (see Bedingham paragraph [0047]).
Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, Agusanto, and Bedingham does not explicitly teach the attitude sensor is used to guide the ear cleaning arrangement to move along the movement direction to reach the target.
Carls teaches the attitude sensor is used to guide the ear cleaning arrangement to move along the movement direction to reach the target (Carls FIG. 2, guidance system 500; Carls FIG. 6, motion sensor 506; Carls paragraph [0054], “Motion sensor 506 can include any one or combination of accelerometers, optical sensors, electromagnetic sensors, radio-frequency emitters, and angular sensors, for example. Motion sensor 506 provides data regarding at least one of the direction and distance of movement of connecting element 200c, along with the relative positional data between connecting element 200c and the target location of one or more of anchors 300a, 300b determined according to the tracking and targeting devices discussed hereinabove. The motion data and relative positional data can be sent to processor subsystem 520 and used to calculate an actual insertion path”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Carls’s guidance system 500 with the method taught by Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, Agusanto, and Bedingham. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of a method that guides an ear cleaning arrangement to the earwax target.
Regarding Claim 13, Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto teaches the features of Claim 1, as described above.
Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto does not explicitly teach wherein the attitude sensor comprises a three-axis gyroscope and a three-axis accelerometer for acquiring the attitude information of the ear cleaning arrangement and for transmitting the attitude information to the one or more processors, and for guiding the ear cleaning arrangement to move along the movement direction to reach the target.
Bedingham teaches wherein the attitude sensor comprises a three-axis gyroscope and a three-axis accelerometer for acquiring the attitude information of the ear cleaning arrangement and for transmitting the attitude information to the one or more processors (Bedingham FIG. 3, control unit 350 of multifunctional medical device 300; see also multifunctional medical device 100 in Bedingham FIG. 1; Bedingham paragraph [0040], “control unit 350 may include at least one of an accelerometer, a gyroscope”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Bedingham’s attitude sensor with the ear cleaning arrangement taught by Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of an ear cleaning arrangement that has position sensors within the insertion portion, thus enabling reorientation of displayed images (see Bedingham paragraph [0047]).
Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, Agusanto, and Bedingham does not explicitly teach the attitude sensor is used to guide the ear cleaning arrangement to move along the movement direction to reach the target.
Carls teaches the attitude sensor is used to guide the ear cleaning arrangement to move along the movement direction to reach the target (Carls FIG. 2, guidance system 500; Carls FIG. 6, motion sensor 506; Carls paragraph [0054], “Motion sensor 506 can include any one or combination of accelerometers, optical sensors, electromagnetic sensors, radio-frequency emitters, and angular sensors, for example. Motion sensor 506 provides data regarding at least one of the direction and distance of movement of connecting element 200c, along with the relative positional data between connecting element 200c and the target location of one or more of anchors 300a, 300b determined according to the tracking and targeting devices discussed hereinabove. The motion data and relative positional data can be sent to processor subsystem 520 and used to calculate an actual insertion path”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Carls’s guidance system 500 with the method taught by Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, Agusanto, and Bedingham. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of a method that guides an ear cleaning arrangement to the earwax target.
Regarding Claim 14, Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto teaches the features of Claim 1, as described above.
Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto does not explicitly teach wherein the attitude information includes information of current attitude angle, current rotation angle, and azimuth angle of the ear spoon assembly of the ear cleaning arrangement not only to determine the attitude of the ear spoon assembly but also to guide the ear cleaning arrangement to move along the movement direction to reach the target.
Bedingham teaches wherein the attitude information includes information of current attitude angle, current rotation angle, and azimuth angle of the ear spoon assembly of the ear cleaning arrangement to determine the attitude of the ear spoon assembly (Bedingham FIG. 3, control unit 350 of multifunctional medical device 300; see also multifunctional medical device 100 in Bedingham FIG. 1; Bedingham paragraph [0040], “control unit 350 may include at least one of an accelerometer, a gyroscope”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Bedingham’s attitude sensor with the ear cleaning arrangement taught by Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, and Agusanto. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of an ear cleaning arrangement that has position sensors within the insertion portion, thus enabling reorientation of displayed images (see Bedingham paragraph [0047]).
Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, Agusanto, and Bedingham does not explicitly teach the attitude sensor is used to guide the ear cleaning arrangement to move along the movement direction to reach the target.
Carls teaches the attitude sensor is used to guide the ear cleaning arrangement to move along the movement direction to reach the target (Carls FIG. 2, guidance system 500; Carls FIG. 6, motion sensor 506; Carls paragraph [0054], “Motion sensor 506 can include any one or combination of accelerometers, optical sensors, electromagnetic sensors, radio-frequency emitters, and angular sensors, for example. Motion sensor 506 provides data regarding at least one of the direction and distance of movement of connecting element 200c, along with the relative positional data between connecting element 200c and the target location of one or more of anchors 300a, 300b determined according to the tracking and targeting devices discussed hereinabove. The motion data and relative positional data can be sent to processor subsystem 520 and used to calculate an actual insertion path”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Carls’s guidance system 500 with the method taught by Bebird in view of Bebird R3 Instruction manual, Ahmed, Hara, Agusanto, and Bedingham. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of a method that guides an ear cleaning arrangement to the earwax target.
Response to Arguments
Applicant’s arguments, see pages 8-16, filed December 29, 2025, with respect to the rejection(s) of Claims 1-12 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of the Bebird R3 Smart Visual Ear Cleaner Instruction Manual (https://manuals.plus/ae/1005007326707919, Year: 2021 – “Bebird R3 Instruction manual”).
Regarding Claim 1, on pages 9-10 Applicant asserts that the cited prior art does not teach before inserting the ear spoon assembly into an ear canal of a user to move the camera element therewithin, selecting a frame from the video stream collected by the camera element to be recorded as a recorded image and recognizing a target captured in the selected frame to be processed by analyzing the recorded image. This feature is found in newly-cited Bebird R3 Instruction manual, as described above.
On page 10, Applicant states that Bebird does not address attitude information. Examiner notes that Ahmed and/or Agusanto and/or Bedingham, not Bebird, is cited for teaching this feature.
On page 10, Applicant states that Bebird does not include a processor for analyzing images. Examiner notes that this feature is not claimed. Rather, Claim 1 recites “processors for processing image information”, such as processing image information for display. Claim 1 does not claim a processor otherwise analyzing images. As claimed, the analyzing steps of Claim 1 can be performed by the operator.
On pages 10-11, Applicant states that Bebird does not teach guiding a movement direction to the target. This feature is found in newly-cited Bebird R3 Instruction manual, as described above in the rejection of Claim 1.
Regarding Claim 2, Applicant states on page 11 that Bebird does not teach setting a path to target earwax. This feature is found in newly-cited Bebird R3 Instruction manual, as described above in the rejection of Claim 2.
Regarding Claim 3, Applicant asserts on page 11 that the previously cited art does not teach targeting an otic area that is outside of the ear canal. This feature is found in newly-cited Bebird R3 Instruction manual, as described above in the rejection of Claim 3.
Regarding Claim 10, Applicant states on page 12 that the previously cited art does not teach a tip end of the flexible outer spoon is extended to and aligned with a center of the lens and surface of the camera. This feature is found in newly-cited Berbee, as described above in the rejection of Claim 10.
Regarding Claim 12, Applicant states on page 12 that Bebird fails to teach the attitude sensor is built-in with the ear cleaning arrangement not only to determine the attitude of the ear spoon assembly but also to guide the ear cleaning arrangement to move along the movement direction to reach the target. These features are cited in Bedingham and Carls, as described above in the rejection of Claim 12.
Regarding Claim 13, Applicant states on page 13 that Bebird fails to teach the attitude sensor comprises a three-axis gyroscope and a three-axis accelerometer for guiding the ear cleaning arrangement to move along the movement direction to reach the target. This feature is cited in Bedingham and Carls, as described above in the rejection of Claim 13.
Regarding Claim 14, Applicant states on page 13 that Bebird fails to teach the attitude sensor comprises a three-axis gyroscope and a three-axis accelerometer for acquiring the attitude information of the ear cleaning arrangement and for transmitting the attitude information to the one or more processors, and for guiding the ear cleaning arrangement to move along the movement direction to reach the target. These features are cited in Bedingham and Carls, as described above in the rejection of Claim 14.
On pages 13-16, Applicant states that it would not be obvious to combine one or more of Bebird, Ahmed, Hara, Agusanto, and/or Bedingham, and cites this art in paragraphs 10-15 of the arguments.
In paragraph 10, Applicant asserts that Ahmed does not disclose any movement direction as a guide to reach the target, nor does Ahmed’s Inertial Measurement Unit analyze image data to generate a movement direction. First, Carls, not Ahmed, is cited for guiding the ear cleaning arrangement. Second, as described above in response to the arguments against the rejection of Claim 1, the claims do not claim a processor or other logic guiding movement.
In paragraph 11, Applicant asserts that the device of Hara does not analyze image data to generate a movement direction to reach the target. As described above in response to the arguments against the rejection of Claim 1, the claims do not claim a processor or other logic guiding movement.
In paragraph 12, Applicant asserts that Agusanto does not analyze image data to generate a movement direction to reach the target. As described above in response to the arguments against the rejection of Claim 1, the claims do not claim a processor or other logic guiding movement.
In paragraph 13, Applicant asserts that the combined structure of Bebird, Ahmed, Hara, and Agusanto will fail to teach that "by incorporating with an attitude sensor, the processor is able to analyze the selected image and to generate a movement direction to the earwax in order to guide the ear cleaning arrangement to move along the movement direction to reach the earwax after inserting the ear spoon assembly into an ear canal”. Examiner is unable to determine which claim is being referenced in this argument. However, as described above in response to the arguments against the rejection of Claim 1, the claims do not claim a processor or other logic guiding movement.
In paragraph 14, Applicant asserts that the cited prior art fails to teach or suggest "by incorporating with an attitude sensor, the processor is able to analyze the selected image and to generate a movement direction to the entrance of the ear canal in order to guide the ear cleaning arrangement to move along the movement direction to reach the entrance of the ear canal before inserting the ear spoon assembly into an ear canal”. Examiner is unable to determine which claim is being referenced in this argument. However, as described above in response to the arguments against the rejection of Claim 1, the claims do not claim a processor or other logic guiding movement.
In paragraph 15, Applicant asserts that Bedingham does not teach a control unit to guide the ear cleaning arrangement to move along the movement direction to reach the target. Examiner is unable to determine which claim is being referenced in this argument. However, as described above in response to the arguments against the rejection of Claim 1, the claims do not claim a processor or other logic guiding movement.
In paragraph 16, Applicant asserts that no individual cited prior art reference teaches an ear cleaning arrangement having a built-in attitude sensor. Examiner notes that the references cited for the feature of an attitude sensor are used as teaching art under 35 U.S.C. 103, not 35 U.S.C. 102; a rejection under 35 U.S.C. 103 does not require any single reference to teach every claimed feature. Examiner further notes that Applicant’s statement that “No one in the art of ear cleaning industry modifies the ear cleaning device with the attitude sensor” is conclusory, irrelevant, and/or not supported by the prosecution record.
In paragraph 17, Applicant asserts that the cited prior art fails to show incorporating a built-in attitude sensor in an ear cleaning arrangement. Examiner notes that the combination shown above in the rejection of exemplary Claim 1 describes the motivation for combining these elements. Despite a typographical error in paragraph 17, Applicant appears to state that the cited prior art fails to show how to incorporate the attitude sensor with the ear cleaning arrangement. Examiner is unaware of any requirement in the MPEP for the examiner to provide art that provides directions/instructions on how to combine prior art elements. Rather, if motivation is explained as shown above, and there is no violation of requirements found in the MPEP (e.g., MPEP 2143.01(V)), then there is a presumption that a PHOSITA would be able to combine the cited components (see MPEP 2141.03(I), citing KSR Int'l Co. v. Teleflex Inc., 550 U.S. 398, 421, 82 USPQ2d 1385, 1397 (2007): "[I]n many cases a person of ordinary skill will be able to fit the teachings of multiple patents together like pieces of a puzzle." Id. at 420, 82 USPQ2d at 1397).
In paragraph 18, Applicant asserts that the examiner is required to show a motivation to combine references. As Applicant has not disputed such a demonstration for any particular rejection, Examiner is unable to respond other than to direct the reader’s attention to the statements of motivation to combine in the rejections presented above.
In paragraphs 19-21, Applicant reiterates an argument that the cited prior art does not teach or suggest the features found in the present amendments, without offering any specificity or further arguments.
As such, the rejection of Claims 1-14 under 35 U.S.C. 103 is maintained.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIM BOICE whose telephone number is (571)272-6565. The examiner can normally be reached Monday-Friday 9:00am - 5:00pm Eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anhtuan Nguyen can be reached at (571)272-4963. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JIM BOICE
Examiner
Art Unit 3795
/JAMES EDWARD BOICE/Examiner, Art Unit 3795
/ANH TUAN T NGUYEN/Supervisory Patent Examiner, Art Unit 3795
03/16/2026