DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on June 20, 2024, complies with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
35 USC § 101 Statutory Analysis
The claims do not recite any of the judicial exceptions enumerated in the 2019 Revised Patent Subject Matter Eligibility Guidance. Further, the claims do not recite any method of organizing human activity, such as a fundamental economic concept or managing interactions between people. Finally, the claims do not recite a mathematical relationship, formula, or calculation. Thus, the claims are eligible because they do not recite a judicial exception.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. §102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-6 and 9-15 are rejected under 35 U.S.C. §102(a)(1) as being anticipated by Shani et al. (U.S. Patent Application Publication No. 2015/0107034 A1) (hereafter referred to as “Shani”).
With regard to claim 1, Shani describes a camera configured, in use, to capture images of one or more features of a user (see Figure 2A, element 18, and refer for example to paragraph [0040]); and a vibrator adapted to vibrate the personal care device so that, in use, the personal care device vibrates (see Figures 1D-F and refer for example to paragraphs [0035] and [0036]), and wherein the method comprises processing a captured image from the camera with an edge detection algorithm to identify a region of the captured image comprising image data having an edge quality value meeting a first predetermined requirement (refer for example to paragraphs [0041] and [0068]; the edge detection algorithm performs advanced picture processing algorithms on each received image, starting with image stabilization and stitching algorithms to compensate for jitter, and also eliminates non-relevant areas, which reasonably meets the claimed requirement that “an edge quality value meets a first predetermined requirement”); extracting image data from the identified region of the captured image (refer for example to paragraphs [0041] and [0068]); and generating a reconstruction image comprising the extracted image data (refer for example to paragraphs [0041] and [0068]).
As to claim 2, Shani describes wherein the personal care device comprises an oral care device, and wherein the camera is configured, in use, to capture images of one or more oral features of the user (see Figures 1A-C and refer to paragraph [0034], which show an oral care device, and see Figure 2A, element 18, and refer for example to paragraph [0040], which describes capturing images of one or more oral features of the user).
In regard to claim 3, Shani describes obtaining, as a reference image, an image captured by the camera, and wherein the first predetermined requirement is that the edge quality value of the image data of the identified region is greater than an edge quality value of image data of a corresponding region of the reference image (refer for example to paragraph [0066]; the edge detection algorithm performs advanced picture processing algorithms on each received image, starting with image stabilization and stitching algorithms to compensate for jitter, and also eliminates non-relevant areas, which reasonably meets the claimed requirement that “the edge quality value of the image data of the identified region is greater than an edge quality value of image data of a corresponding region of the reference image”).
With regard to claim 4, Shani describes replacing image data of the reference image with image data of the reconstruction image (refer to paragraph [0066], the processor updates the image data with specific coordinates and locations from the current captured images).
As to claim 5, Shani describes storing the extracted image data from the identified region in a corresponding region of the reference image (refer to paragraph [0066], the processor updates the image data with specific coordinates and locations from the current captured images).
In regard to claim 6, Shani describes wherein the first predetermined requirement is that the edge quality value of the image data of the identified region is greater than a predetermined edge quality threshold value (refer for example to paragraphs [0041] and [0068]; the edge detection algorithm performs advanced picture processing algorithms on each received image, starting with image stabilization and stitching algorithms to compensate for jitter, and also eliminates non-relevant areas, which reasonably meets the claimed requirement that “the edge quality value of the image data of the identified region is greater than a predetermined edge quality threshold value”).
As to claim 9, Shani describes wherein generating the reconstruction image comprises storing the extracted image data from the identified region in a corresponding region of the reconstruction image (refer for example to paragraph [0086]).
With regard to claim 10, Shani describes a computer program comprising computer program code which is adapted to implement the method of claim 1 (see Figure 6, element 54, and refer for example to paragraphs [0077] and [0087]).
As to claim 11, Shani describes a camera adapted, in use, to capture images of one or more features of a user (see Figure 2A, element 18, and refer for example to paragraph [0040]); and a vibrator adapted to vibrate the personal care device so that, in use, the personal care device vibrates with a vibration cycle having a vibration frequency (see Figures 1D-F and refer for example to paragraphs [0035] and [0036]); and a system for processing captured images, comprising a processor (see Figure 6, element 54, and refer for example to paragraphs [0077] and [0087]) configured to process a captured image from the camera with an edge detection algorithm to identify a region of the captured image comprising image data having an edge quality value meeting a first predetermined requirement (refer for example to paragraphs [0041] and [0068]; the edge detection algorithm performs advanced picture processing algorithms on each received image, starting with image stabilization and stitching algorithms to compensate for jitter, and also eliminates non-relevant areas, which reasonably meets the claimed requirement that “an edge quality value meets a first predetermined requirement”); and an image processor (see Figure 6, element 54, and refer for example to paragraphs [0077] and [0087]) configured to extract image data from the identified region of the captured image and to generate a reconstruction image comprising the extracted image data (refer for example to paragraphs [0041] and [0068]).
In regard to claim 12, Shani describes an interface configured to obtain, as a reference image, an image captured by the camera (refer for example to paragraph [0066]; the processor updates the image data with specific coordinates and locations from the current captured images), and wherein the first predetermined requirement is that the edge quality value of the image data of the identified region is greater than an edge quality value of image data of a corresponding region of the reference image (refer for example to paragraphs [0041] and [0068]; the edge detection algorithm performs advanced picture processing algorithms on each received image, starting with image stabilization and stitching algorithms to compensate for jitter, and also eliminates non-relevant areas, which reasonably meets the claimed requirement that “the first predetermined requirement is that the edge quality value of the image data of the identified region is greater than an edge quality value of image data of a corresponding region of the reference image”).
With regard to claim 13, Shani describes wherein the image processor is further configured to replace image data of the reference image with image data of the reconstruction image (refer to paragraph [0066], the processor updates the image data with specific coordinates and locations from the current captured images).
As to claim 14, Shani describes wherein generating the reconstruction image comprises storing the extracted image data from the identified region in a corresponding region of the reconstruction image (refer for example to paragraph [0086]).
In regard to claim 15, Shani describes wherein the personal care device is a toothbrush (see Figures 1A-C and refer for example to paragraph [0034]).
Allowable Subject Matter
Claims 7 and 8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Relevant Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Vetter, Subhash, Zhang, Wu, Hu, Wang, Xiong, Zhang, and Li all disclose systems similar to applicant’s claimed invention.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jose L. Couso whose telephone number is (571) 272-7388. The examiner can normally be reached on Monday through Friday from 5:30am to 1:30pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella, can be reached on 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Center information webpage on the USPTO website. For more information about Patent Center, see https://www.uspto.gov/patents/apply/patent-center. Should you have questions about access to Patent Center, contact the Patent Electronic Business Center (EBC) at 571-272-4100 or via email at ebc@uspto.gov.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
/JOSE L COUSO/Primary Examiner, Art Unit 2667
February 24, 2026