DETAILED ACTION
Reissue
The present reissue application is directed to US 10,332,249 B2 (“249 Patent”). 249 Patent issued on June 25, 2019 with claims 1-19 from application 15/452,707, filed on March 7, 2017, which claims priority to provisional application 62/304,729, filed on March 7, 2016.
This application was filed on December 20, 2024. Since this date is after September 16, 2012, all references to 35 U.S.C. 251 and 37 CFR 1.172, 1.175, and 3.73 are to the current provisions. Furthermore, the present application is being examined under the first inventor to file provisions of the AIA.
This application is a continuation reissue of reissue application 17/357,106 (now US RE50,340 E).
This application presents broadened claims, which are permitted because Applicant filed these claims and demonstrated an intent to broaden within two years of the issue date of 249 Patent (see claims filed on June 24, 2021 in 17/357,106).
The most recent amendment was filed on December 20, 2024. The current status of the claims is as follows, although claims 1-11 and 19 will need to be canceled and presented as additional new claims because this application is a continuation reissue (see 35 U.S.C. 251 and 35 U.S.C. 112 rejections below in this action):
Claims 1, 3, 5, 6, and 19: Amended
Claims 2, 4, and 7-11: Original
Claims 12-18: Canceled
Claims 20-42: New
This is a first, non-final action.
References and Documents Cited in this Action
249 Patent (US 10,332,249 B2)
US RE50,340 E (reissue of 249 Patent)
US 10,810,732 B2 (continuation of 249 Patent)
Nguyen 685 (US 2016/0019685 A1)
Sobczak (US 2014/0254862 A1)
Bowles 611 (US 2013/0046611 A1)
Onishi (JP 2010-66186 A)
Summary of Rejections and Objections in this Action
Claims 1-11 and 19 are rejected under 35 U.S.C. 251.
Claims 1-11 and 19 are rejected under 35 U.S.C. 112(b).
Claims 20, 21, 30-35, 38-41, and 42 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen 685 in view of Sobczak.
Claims 22-29 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen 685 in view of Sobczak as applied to claim 20 above, and further in view of Bowles 611.
Claims 36 and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen 685 in view of Sobczak as applied to claim 20 above, and further in view of Onishi.
Claims 1-11 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen 685 in view of Sobczak, Bowles 611, and Onishi.
Claims 20-22, 24, and 42 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 9 of U.S. Patent No. 10,810,732 B2.
Claims 20, 22, 24, 25, and 36 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 20, 23, 24, 26, and 29 of US RE50,340 E, respectively, in view of Sobczak.
Summary of the Claims
249 Patent is generally directed to a method to identify a condition of a screen of an electronic device including displaying graphics on the screen and using a camera of the electronic device to capture an image of the screen. Claim 20 is representative:
20. A method to identify a condition of a screen of an electronic device, the method comprising:
causing display of one or more first graphics on a screen of an electronic device, wherein the screen is on a first side of the electronic device;
capturing a first image of at least a portion of a first graphic of the one or more first graphics via a camera on the first side of the electronic device;
identifying the screen or an active portion thereof of the electronic device in the first image;
generating a second image corresponding to the identified screen or the active portion thereof by restricting one or more portions of the first image that are not identified as the screen or the active portion thereof from inclusion in the second image; and
processing one or more of the first image or the second image to determine a condition of at least a portion of the screen of the electronic device.
Claims 1, 19, 20, and 42 are the independent claims. Claim 1 recites a method similar to claim 20 and further includes additional steps such as dividing the second image into parts. Claims 19 and 42 each recite a non-transitory computer readable medium comprising instructions for a computer processor to execute a method corresponding to claims 1 and 20, respectively.
Claim Rejections - 35 USC § 251
Claims 1-11 and 19 are rejected under 35 U.S.C. 251, because the reissue application is not correcting an error in the original patent. Claims 1-11 and 19 of 249 Patent have been superseded by the previous reissue US RE50,340 E. Once a claim in the patent has been reissued, it does not exist in the original patent; thus, it cannot be reissued from the original patent in another reissue application. Applicant should cancel claims 1-11 and 19. The subject matter recited in claims 1-11 and 19 may be presented as additional new claims. See MPEP 1451 I for further details (the discussion therein of numbering claims in a divisional reissue application applies also to this continuation reissue).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-11 and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 1-11 and 19 are indefinite because the inventions of claims 1-11 and 19 are not particularly pointed out and distinctly claimed. Claims 1-11 and 19 present one scope of coverage in the previous reissue US RE50,340 E and a different scope in the present reissue application, which is inconsistent. Once a claim in the patent has been reissued, it does not exist in the original patent; thus, it cannot be reissued from the original patent in another reissue application. See MPEP 1451 I for further details (the discussion therein of numbering claims in a divisional reissue application applies also to this continuation reissue).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The 35 U.S.C. 103 rejections below rely on Nguyen 685 as the primary reference in view of various combinations of Bowles 611, Sobczak, and/or Onishi as secondary references. To summarize generally: Nguyen 685 discloses evaluating the condition of a screen of a device by presenting a graphic on the screen and capturing images of a reflection of the device; Bowles 611 teaches presenting a graphic comprising an identification code and capturing an image of that graphic; Sobczak teaches generating a cropped image by restricting portions of a captured image; and Onishi teaches analyzing an image to determine a condition of the object in the image by dividing the image into parts and identifying parts and adjacent parts that include damage.
Claims 20, 21, 30-35, 38-41, and 42 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen 685 in view of Sobczak.
Regarding independent claim 20, dependent claim 34, and independent claim 42, Nguyen 685 discloses a method to identify a condition of a screen of an electronic device (Abstract; Figure 1), the method comprising:
causing display of one or more first graphics on a screen of an electronic device, wherein the screen is on a first side of the electronic device (Figure 1; Nguyen 685 discloses the screen could “show a solid color or a static image” and “the screen could show a grid”; paragraph [0027]);
capturing a first image of at least a portion of a first graphic of the one or more first graphics via a camera on the first side of the electronic device (Figure 1; paragraphs [0024]-[0027]);
identifying the screen or an active portion thereof of the electronic device in the first image (in particular, Nguyen 685 discloses that the screen could display solid white to highlight the screen in the image of the device and “The further analysis step changes the visual parameters of the photo until the glowing device screen is visually distinct from the rest of the device”; paragraphs [0027] and [0034]); and
processing one or more of the first image or the second image (i.e., at least the first image) to determine a condition of at least a portion of the screen of the electronic device (paragraphs [0030]-[0034]).
Regarding independent claim 42 in particular, Nguyen 685 discloses a non-transitory computer readable medium comprising instructions executed by a computer processor for performing the above operations (e.g., “an app installed on the electronic device itself”; paragraph [0017]).
Further regarding claims 20, 34, and 42, Nguyen 685 does not specifically disclose generating a second image corresponding to the identified screen or the active portion thereof by restricting one or more portions of the first image that are not identified as the screen or the active portion thereof from inclusion in the second image.
However, Sobczak teaches a method that is related to the method disclosed by Nguyen 685, including identifying a condition of an object (e.g., a belt) based on capturing and analyzing an image of that object (Sobczak, Abstract and paragraph [0002]; Figures 4 and 6). Sobczak further teaches generating a cropped image corresponding to the identified belt by restricting one or more portions of the initial image that are not identified as the belt from inclusion in the cropped image (Sobczak, paragraph [0078]). Regarding claim 34 in particular, Sobczak teaches that generating the cropped image comprises altering the initial image such that all portions of the initial image that are not identified as the desired object are removed (Sobczak, paragraph [0078]).
Regarding claims 20, 34, and 42, it would have been obvious to a person of ordinary skill in the art to generate a second (i.e., cropped) image corresponding to the identified screen or the active portion thereof by restricting one or more portions of the first image that are not identified as the screen or the active portion thereof from inclusion, and by altering the first image such that all portions of the first image that are not identified as the screen or the active portion thereof are removed, in the method and app disclosed by Nguyen 685, as taught by Sobczak, in order to advantageously remove extraneous parts of the image and further facilitate the analysis of the device screen within the image.
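Purely as an illustration of the restriction step described above (hypothetical code, not disclosed by any cited reference), a minimal sketch that generates a second image containing only a previously identified rectangular screen region, excluding all other portions of the first image:

```python
import numpy as np

def restrict_to_screen(first_image: np.ndarray, screen_bbox: tuple) -> np.ndarray:
    """Generate a second image corresponding to the identified screen.

    first_image: an H x W (or H x W x C) pixel array.
    screen_bbox: (top, left, bottom, right) bounds of the identified
    screen region. Portions of first_image outside screen_bbox are
    restricted from inclusion in the returned second image.
    """
    top, left, bottom, right = screen_bbox
    return first_image[top:bottom, left:right].copy()

# Example: a 6x8 "image" whose identified screen occupies rows 1-4, cols 2-6
first = np.arange(48).reshape(6, 8)
second = restrict_to_screen(first, (1, 2, 5, 7))  # 4x5 cropped image
```

The bounding-box representation of the identified screen is an assumption made for this sketch; any representation of the identified region would serve.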
Regarding claim 21, in the method taught by Nguyen 685 in view of Sobczak, Nguyen 685 discloses that the first image comprises a reflection of the at least the portion of the first graphic (i.e., Nguyen 685 discloses that the second image is an image of the reflection in mirror 110 of the device 100; Figure 1; paragraphs [0024]-[0027]).
Regarding claim 30, in the method taught by Nguyen 685 in view of Sobczak, Nguyen 685 further discloses that the first graphic is generated for a short period of time, at least in the sense that Nguyen 685 discloses that a graphic (such as a grid) is temporarily shown for purposes of taking photos of the front of the device before the app proceeds to instruct the user to take additional photos of other sides of the device (paragraphs [0027]-[0028]). The claim does not recite further details with respect to the length of a “short” period of time.
Regarding claim 31, in the method taught by Nguyen 685 in view of Sobczak, Nguyen 685 further discloses determining that the first image is not a processable image (i.e., Nguyen 685 discloses that an app on the device takes photos “when the electronic device is correctly positioned” in front of the mirror, which inherently includes initially capturing images of an incorrectly positioned device and determining that the images are not processable; paragraphs [0024]-[0025]);
causing the electronic device to be reoriented to capture a processable image by:
determining an orientation of the electronic device based on the first image (i.e., the device determines whether the device is oriented in a correct way or not by capturing the image of its reflection; paragraphs [0024]-[0026]; Figures 2A-B); and
providing guidance to adjust the orientation of the electronic device based on the determined orientation (i.e., Nguyen 685 discloses “instructing the user on correct positioning” and that the user can be instructed to manually take photos “when the electronic device is correctly positioned”; paragraph [0025]).
Regarding claim 32, in the method taught by Nguyen 685 in view of Sobczak, Nguyen 685 discloses restricting one or more processes of the electronic device at least in the sense that Nguyen 685 discloses disabling the usual display of the front camera view on the device screen (Nguyen 685, paragraph [0026]).
Regarding claim 33, in the method taught by Nguyen 685 in view of Sobczak, Nguyen 685 discloses identifying the screen or an active portion thereof of the electronic device in the first image as discussed above with regard to claim 20 but does not specifically disclose corner or edge detection. However, again, Sobczak teaches a method that is related to the method disclosed by Nguyen 685, including identifying a condition of an object (e.g., a belt) based on capturing and analyzing an image of that object (Sobczak, Abstract and paragraph [0002]; Figures 4 and 6). Sobczak further teaches using edge detection to identify the object within the image (Sobczak, Figure 7; paragraphs [0086]-[0094]). Regarding claim 33, it would have been obvious to a person of ordinary skill in the art to use edge detection as taught by Sobczak in the method taught by Nguyen 685 in view of Sobczak in order to effectively identify the device screen within the image to perform analysis of its condition.
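As a hypothetical illustration of identifying the lit screen within an image (a simplified brightness-threshold stand-in for the glowing-screen analysis disclosed by Nguyen 685 and the edge detection taught by Sobczak; not code from either reference):

```python
import numpy as np

def locate_screen(image: np.ndarray, threshold: int = 200) -> tuple:
    """Locate a glowing (bright) screen in a grayscale image.

    Returns a (top, left, bottom, right) bounding box around pixels
    brighter than `threshold`, i.e. the region where the lit screen
    is visually distinct from the rest of the device.
    """
    rows, cols = np.where(image > threshold)
    return (rows.min(), cols.min(), rows.max() + 1, cols.max() + 1)

# A dark 10x10 image with a bright 4x3 "screen" at rows 3-6, cols 2-4
img = np.zeros((10, 10), dtype=np.uint8)
img[3:7, 2:5] = 255
bbox = locate_screen(img)  # (3, 2, 7, 5)
```

The threshold value and the assumption of a single bright rectangular region are simplifications for illustration; Sobczak's edge-detection approach would instead locate the boundary between the object and its surroundings.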
Regarding claim 35, in the method taught by Nguyen 685 in view of Sobczak, Nguyen 685 discloses that identifying the screen or the active portion thereof comprises identifying the active area of the screen of the electronic device (e.g., the screen could display solid white to highlight the screen in the image of the device and “The further analysis step changes the visual parameters of the photo until the glowing device screen is visually distinct from the rest of the device”; paragraphs [0027] and [0034]).
Regarding claim 38, in the method taught by Nguyen 685 in view of Sobczak, Nguyen 685 discloses that the first graphic comprises one or more of a solid color display, a pattern, a photograph, or an identifier (e.g., “solid white” or “a grid”; paragraph [0027]).
Regarding claim 39, in the method taught by Nguyen 685 in view of Sobczak, Nguyen 685 discloses that the first image comprises the screen and an area proximate the screen of the electronic device (i.e., the image of the electronic device comprises the screen and the rest of the front of the device, including an area proximate the screen; Figures 1 and 2A-B; paragraph [0026]).
Regarding claim 40, in the method taught by Nguyen 685 in view of Sobczak, Nguyen 685 discloses that processing one or more of the first image or the second image to determine the condition of the at least the portion of the screen of the electronic device comprises analyzing one or more portions of the first image or the second image comprising one or more of the first graphics (“After the photos are taken, they are analyzed….”; paragraphs [0030]-[0034]).
Regarding claim 41, in the method taught by Nguyen 685 in view of Sobczak, Nguyen 685 discloses receiving a request for evaluation of the condition of the at least the portion of the screen of the electronic device, at least in the sense that a user of the method/app disclosed by Nguyen 685 “for performing a cosmetic evaluation of an electronic device” (including the screen) inherently requests the evaluation by using the app.
Claims 22-29 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen 685 in view of Sobczak as applied to claim 20 above, and further in view of Bowles 611.
Regarding claims 22-24, Nguyen 685 in view of Sobczak teaches a method as discussed above with regard to claim 20 including analyzing a reflection of a graphic displayed on the device but does not teach causing display of a second graphic comprising a first identification code.
However, Bowles 611 teaches a method that is related to the one taught by Nguyen 685 in view of Sobczak, including identifying a condition of a screen of an electronic device 150 based on capturing and analyzing an image of the device (Bowles 611, Abstract; paragraph [0067]; Figures 7-9). Bowles 611 further teaches causing display of a second graphic on the screen of the electronic device (e.g., an “about page” or a serial number display; Figure 15; paragraphs [0017], [0021], and [0080]), wherein the second graphic comprises a first identification code (e.g., an “IMEI number or unique serial number”; paragraph [0021]); and analyzing the captured image of the second graphic (Figure 15, steps 2002 and 2003; paragraph [0080]; see also page 8, claims 8 and 10). Regarding claim 23 in particular, Bowles 611 teaches that analyzing the second graphic comprises analyzing the first identification code and verifying an identity of the electronic device based on the analysis of the first identification code (paragraph [0021]). Regarding claim 24 in particular, Bowles 611 teaches that analyzing the second graphic comprises capturing a third image of at least a portion of the second graphic and analyzing the third image (Figure 15, steps 2002 and 2003; paragraph [0080]).
Regarding claims 22-24, it would have been obvious to a person of ordinary skill in the art to display, capture, and analyze a second graphic comprising a first identification code as taught by Bowles 611 in the method taught by Nguyen 685 in view of Sobczak (wherein the graphic would be captured via a reflection of the device as disclosed by Nguyen 685) in order to advantageously confirm the identity and model number of the device (Bowles 611, paragraph [0021]).
Regarding claim 25, Nguyen 685 in view of Sobczak and Bowles 611 does not specifically teach determining an orientation of the electronic device based on the third image of the at least the portion of the second graphic.
However, as discussed above with regard to parent claims 20 and 22-24, Nguyen 685 in view of Sobczak and Bowles 611 teaches capturing a third image of a second graphic (i.e., an identification code as taught by Bowles 611) in combination with capturing a first image of a first graphic (i.e., a solid color or a grid as disclosed by Nguyen 685). Nguyen 685 further discloses that capturing the first image of the at least the portion of the first graphic comprises determining an orientation of the electronic device based on a feedback image (i.e., the device determines whether the device is oriented in a correct way or not by capturing the image of its reflection; paragraphs [0024]-[0026]; Figures 2A-B); and
providing guidance to adjust the orientation of the electronic device based on the determined orientation (i.e., Nguyen 685 discloses “instructing the user on correct positioning” and that the user can be instructed to manually take photos “when the electronic device is correctly positioned”; paragraph [0025]).
Regarding claim 25, it would have been obvious to a person of ordinary skill in the art to determine an orientation of the electronic device based on the third image of the at least the portion of the second graphic (instead of merely any image) in the method taught by Nguyen 685 in view of Sobczak and Bowles 611 in order to efficiently orient the device for the capture of the first image while confirming the identity of the device. In other words, Nguyen 685 already generally discloses guiding the user to adjust the orientation based on some initial image, and Bowles 611 teaches particularly capturing a third image of a second graphic (i.e., an identification code) in order to advantageously identify the device. One of ordinary skill in the art would have been motivated to combine these teachings such that the capturing of the identification graphic provides the orientation feedback for further capturing the first image in order to improve efficiency.
Regarding claims 26 and 27, the method taught by Nguyen 685 in view of Sobczak and Bowles 611 includes capturing an image of at least a portion of the second graphic (e.g., identification information) as taught by Bowles 611 and capturing an image of at least a portion of the first graphic (e.g., a grid) as disclosed by Nguyen 685, wherein these images are reflections captured by the camera of the electronic device as disclosed by Nguyen 685 (see above discussions with regard to parent claims 20, 22, and 24). Nguyen 685 in view of Sobczak and Bowles 611 does not specifically teach capturing an additional image of the first or second graphics, but Nguyen 685 teaches capturing “a photo or photos” of the electronic device generally (Nguyen 685, paragraphs [0024]-[0025]). Regarding claims 26 and 27, it would have been obvious to a person of ordinary skill in the art to capture additional images of the first and second graphics in the method taught by Nguyen 685 in view of Sobczak and Bowles 611 in order to advantageously more accurately analyze the identity and/or condition of the device. Such additional images would merely yield a predictable result of providing further information for analysis, particularly since Nguyen 685 already generally teaches capturing multiple images.
Regarding claim 28, in the method taught by Nguyen 685 in view of Sobczak and Bowles 611 (including a third image with identification information as taught by Bowles 611), Nguyen 685 further discloses tagging the first image with identification information (see Nguyen 685, paragraph [0029]). In the method taught by Nguyen 685 in view of Sobczak and Bowles 611, this tagging is tagging “with at least a portion of the third image,” since the identification information comes from the third image as taught by Bowles 611.
Regarding claim 29, Nguyen 685 in view of Sobczak and Bowles 611 does not specifically teach automatically generating the first graphic based on the determination that the first identification code of the second graphic is in focus.
However, as discussed above with regard to parent claims 20 and 22-24, Nguyen 685 in view of Sobczak and Bowles 611 teaches capturing a third image of a second graphic (i.e., an identification code as taught by Bowles 611) in combination with displaying a first graphic and capturing a first image of the first graphic (i.e., a solid color or a grid as disclosed by Nguyen 685).
Nguyen 685 further discloses that capturing the first image of the first graphic comprises determining if a feedback image is in focus (i.e., the device determines whether the device is ready to correctly capture images by recognizing that the images are within the distance for correct focus; paragraphs [0016] and [0024]-[0026]; Figures 2A-B); and
providing guidance to adjust the electronic device based on the determined focus position (i.e., Nguyen 685 discloses “instructing the user on correct positioning” and that the user can be instructed to manually take photos “when the electronic device is correctly positioned”; paragraph [0025]).
Regarding claim 29, it would have been obvious to a person of ordinary skill in the art to determine that the first identification code of the second graphic is in focus in the third image (instead of merely any image) in the method taught by Nguyen 685 in view of Sobczak and Bowles 611 in order to efficiently ready the device for the capture of the first image while confirming the identity of the device. In other words, Nguyen 685 already generally discloses guiding the user to adjust the focus based on some initial image, and Bowles 611 teaches particularly capturing a third image of a second graphic (i.e., an identification code) in order to advantageously identify the device. One of ordinary skill in the art would have been motivated to combine these teachings such that the capturing of the identification graphic provides the focus feedback for further generating the first graphic and capturing the first image of the first graphic in order to improve efficiency.
Claims 36 and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen 685 in view of Sobczak as applied to claim 20 above, and further in view of Onishi.
Regarding claims 36 and 37, Nguyen 685 in view of Sobczak teaches a method as discussed above with regard to claim 20, including capturing a first image, generating a second image, and processing one or more of the first image or the second image to determine a condition of at least a portion of the screen of the electronic device. Nguyen 685 in view of Sobczak does not specifically teach that the processing comprises dividing the image into parts, determining if the parts include damage, identifying parts adjacent to the parts that have damage, and determining the condition of the screen based on whether the parts are determined to include damage and whether the adjacent parts also include damage.
However, Onishi teaches a method that is related to the one taught by Nguyen 685 in view of Sobczak, including identifying a condition of an object (e.g., a semiconductor substrate) based on capturing and analyzing an image of that object (Onishi, Abstract; Figures 2A-B). Onishi further discloses dividing the image into parts (i.e., regions 12 and sub-regions 22; Figures 2A-B);
determining whether one or more of the parts of the image include damage (e.g., defect 13-1; Figures 2A-B);
identifying parts adjacent to one or more of the parts that include damage (e.g., adjacent sub-regions include additional defects 13-2 and 13-3; Figures 2A-B); and
determining the condition of the object based on whether one or more of the parts of the image are determined to include damage and whether one or more of the parts adjacent to one of the parts determined to include damage also includes damage (i.e., the defects 13-1 to 13-3 are “reported as one defect information 24”; see corresponding description of Figures 2A-B in Onishi).
Regarding claims 36 and 37, it would have been obvious to a person of ordinary skill in the art to divide the first or second image into parts and determine the condition of the imaged object based on parts and adjacent parts including damage as taught by Onishi in the method taught by Nguyen 685 in view of Sobczak, in order to group various pixels with an incorrect appearance in the image into separate defects in the screen that can be counted and analyzed. Nguyen 685 already discloses counting cracks and scratches on the screen as part of evaluating the condition of the device (Nguyen 685, paragraphs [0011] and [0013]).
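Purely as an illustration of this type of analysis (hypothetical code, not disclosed by Onishi or any other cited reference), a minimal sketch that divides an image into grid parts, takes a per-part damage determination as given, and reports adjacent damaged parts as a single defect:

```python
def group_adjacent_damage(damaged: list[list[bool]]) -> int:
    """Count distinct defects in a grid of image parts.

    damaged[r][c] is True when part (r, c) is determined to include
    damage. Damaged parts that are adjacent (sharing an edge) are
    grouped and reported together as one defect.
    """
    rows, cols = len(damaged), len(damaged[0])
    seen = [[False] * cols for _ in range(rows)]
    defects = 0
    for r in range(rows):
        for c in range(cols):
            if damaged[r][c] and not seen[r][c]:
                defects += 1          # a new defect group starts here
                stack = [(r, c)]
                while stack:          # flood-fill adjacent damaged parts
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and damaged[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return defects

# Three damaged parts forming two adjacent clusters: reported as 2 defects
grid = [[True, True, False],
        [False, False, False],
        [False, False, True]]
```

The per-part damage determination itself (e.g., how a crack is detected within a part) is outside the scope of this sketch.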
Claims 1-11 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Nguyen 685 in view of Sobczak, Bowles 611, and Onishi.
Regarding independent claims 1 and 19, to the extent that the claims may be understood with respect to the 35 U.S.C. 112(b) rejection above, Nguyen 685 discloses a method to identify a condition of one or more screens of a first device (Abstract; Figure 1), the method comprising:
receiving a request for evaluation of a condition of a screen of a first device or portion thereof via a return application on the first device (i.e., a user requests an evaluation by using “an app installed on the electronic device itself”; paragraph [0017]);
causing presentation of one or more second graphics on the screen of the first device (Figure 1; Nguyen 685 discloses the screen could “show a solid color or a static image” and “the screen could show a grid”; paragraph [0027]);
capturing at least a portion of one or more second images of at least one of the second graphics via a camera of the first device, wherein each of the second images comprises a reflection of at least one of the second graphics on the reflective surface (Nguyen 685 discloses that the second image is an image of the reflection in mirror 110 of the device 100; Figure 1; paragraphs [0024]-[0027]); and
processing one or more of the second images to determine a condition of the screen of the first device (paragraphs [0030]-[0034]).
Regarding independent claim 19 in particular, Nguyen 685 discloses a non-transitory computer readable medium comprising instructions executed by a computer processor for performing the above operations (e.g., “an app installed on the electronic device itself”; paragraph [0017]).
Further regarding claims 1 and 19, Nguyen 685 does not specifically disclose causing presentation of a first graphic comprising a first identification code. However, Bowles 611 teaches a method that is related to the one disclosed by Nguyen 685, including identifying a condition of a screen of an electronic device 150 based on capturing and analyzing an image of the device (Bowles 611, Abstract; paragraph [0067]; Figures 7-9). Bowles 611 further teaches causing presentation of a first graphic on the screen of the electronic device (e.g., an “about page” or a serial number display; Figure 15; paragraphs [0017], [0021], and [0080]), wherein the first graphic comprises a first identification code (e.g., an “IMEI number or unique serial number”; paragraph [0021]); and capturing at least a portion of a first image of the first graphic (Figure 15, steps 2002 and 2003; paragraph [0080]; see also page 8, claims 8 and 10).
Regarding claims 1 and 19, it would have been obvious to a person of ordinary skill in the art to present and capture a first graphic comprising a first identification code as taught by Bowles 611 in the method disclosed by Nguyen 685 (wherein the graphic would be captured via a reflection of the device as disclosed by Nguyen 685) in order to advantageously confirm the identity and model number of the device (Bowles 611, paragraph [0021]).
Further regarding claims 1 and 19, Nguyen 685 in view of Bowles 611 does not specifically teach generating a third image in which portions of the second image that are not identified as the screen or portion thereof in the second image are restricted from inclusion in the third image. However, Sobczak teaches a method that is related to the method taught by Nguyen 685 in view of Bowles 611, including identifying a condition of an object (e.g., a belt) based on capturing and analyzing an image of that object (Sobczak, Abstract and paragraph [0002]; Figures 4 and 6). Sobczak further teaches generating a cropped image corresponding to the identified belt by restricting one or more portions of the initial image that are not identified as the belt from inclusion in the cropped image (Sobczak, paragraph [0078]).
Regarding claims 1 and 19, it would have been obvious to a person of ordinary skill in the art to generate a third (i.e., cropped) image corresponding to the identified screen or portion thereof by restricting portions of the second image that are not identified as the screen or the portion thereof from inclusion, as taught by Sobczak in the method and app taught by Nguyen 685 in view of Bowles 611 in order to advantageously remove extraneous parts of the image and further facilitate the analysis of the device screen within the image.
Further regarding claims 1 and 19, Nguyen 685 in view of Bowles 611 and Sobczak teach capturing a second image, generating a third image, and processing one or more of the second image or the third image to determine a condition of at least a portion of the screen of the electronic device as discussed above. Nguyen 685 in view of Bowles 611 and Sobczak do not specifically teach that the processing comprises dividing the image into parts, determining if the parts include damage, identifying parts adjacent to the parts that have damage, and determining the condition of the screen based on whether the parts are determined to include damage and whether the parts adjacent also include damage.
However, Onishi teaches a method that is related to the one taught by Nguyen 685 in view of Bowles 611 and Sobczak, including identifying a condition of an object (e.g., a semiconductor substrate) based on capturing and analyzing an image of that object (Onishi, Abstract; Figures 2A-B). Onishi further discloses dividing the image into parts (i.e., regions 12 and sub-regions 22; Figures 2A-B);
determining whether one or more of the parts of the image include damage (e.g., defect 13-1; Figures 2A-B);
identifying parts adjacent to one or more of the parts that include damage (e.g., adjacent sub-regions include additional defects 13-2 and 13-3; Figures 2A-B); and
determining the condition of the object based on whether one or more of the parts of the image are determined to include damage and whether one or more of the parts adjacent to one of the parts determined to include damage also includes damage (i.e., the defects 13-1 to 13-3 are “reported as one defect information 24”; see corresponding description of Figures 2A-B in Onishi).
Regarding claims 1 and 19, it would have been obvious to a person of ordinary skill in the art to divide the second and third image into parts and determine the condition of the imaged object based on parts and adjacent parts including damage as taught by Onishi in the method taught by Nguyen 685 in view of Bowles 611 and Sobczak, in order to group various pixels with an incorrect appearance in the image into separate defects in the screen that can be counted and analyzed. Nguyen 685 already discloses counting cracks and scratches on the screen as part of evaluating the condition of the device (Nguyen 685, paragraphs [0011] and [0013]).
Regarding claim 2, in the method taught by Nguyen 685 in view of Bowles 611, Sobczak, and Onishi, Bowles 611 further teaches verifying an identity of the electronic device based on the analysis of the first identification code (Bowles 611, paragraph [0021]). It would have been obvious to a person of ordinary skill in the art to verify an identity of the electronic device based on the analysis of the first identification code as taught by Bowles 611 in the method taught by Nguyen 685 in view of Bowles 611, Sobczak, and Onishi in order to advantageously confirm the identity and model number of the device (Bowles 611, paragraph [0021]).
Regarding claims 3 and 5, Nguyen 685 in view of Bowles 611, Sobczak, and Onishi teach a method as discussed above with regard to claim 1, including capturing a first image of a first graphic (i.e., an identification code as taught by Bowles 611) in combination with capturing a second image of a second graphic (i.e., a solid color or a grid as disclosed by Nguyen 685), but do not specifically teach determining an orientation of the first device based on the captured image of the first graphic; and providing guidance to adjust an orientation of the first device based on the determined orientation.
However, Nguyen 685 further discloses that capturing the second image of the second graphic comprises determining an orientation of the device based on a feedback image (i.e., the device determines whether the device is oriented in a correct way or not by capturing the image of its reflection; paragraphs [0024]-[0026]; Figures 2A-B); and providing guidance to adjust the orientation of the electronic device based on the determined orientation (i.e., Nguyen 685 discloses “instructing the user on correct positioning” and that the user can be instructed to manually take photos “when the electronic device is correctly positioned”; paragraph [0025]).
Regarding claims 3 and 5, it would have been obvious to a person of ordinary skill in the art to determine an orientation of the electronic device based on the first image of the at least the portion of the first graphic (instead of merely any image) in the method taught by Nguyen 685 in view of Bowles 611, Sobczak, and Onishi in order to efficiently orient the device for the capture of the second image while confirming the identity of the device. In other words, Nguyen 685 already generally discloses guiding the user to adjust the orientation based on some initial image; and Bowles 611 teaches particularly capturing a first image of a first graphic (i.e., an identification code) in order to advantageously identify the device. One of ordinary skill in the art would have been motivated to combine these teachings such that the capturing of the identification graphic provides the orientation feedback for further capturing the second image in order to improve efficiency.
Regarding claim 4 and further regarding claim 5, the method taught by Nguyen 685 in view of Bowles 611, Sobczak, and Onishi includes capturing an image of at least a portion of the first graphic (e.g., identification information) as taught by Bowles 611 and capturing an image of at least a portion of the second graphic (e.g., a grid) as disclosed by Nguyen 685, wherein these images are reflections captured by the camera of the electronic device as disclosed by Nguyen 685 (see above discussions with regard to parent claim 1). Nguyen 685 in view of Bowles 611, Sobczak, and Onishi does not specifically teach capturing an additional image of the first or second graphics, but Nguyen 685 teaches capturing “a photo or photos” of the electronic device generally (Nguyen 685, paragraphs [0024]-[0025]). Regarding claims 4 and 5, it would have been obvious to a person of ordinary skill in the art to capture additional images of the first and second graphics in the method taught by Nguyen 685 in view of Bowles 611, Sobczak, and Onishi in order to advantageously analyze the identity and/or condition of the device more accurately. Such additional images would merely yield the predictable result of providing further information for analysis, particularly since Nguyen 685 already generally teaches capturing multiple images.
Regarding claim 6, in the method taught by Nguyen 685 in view of Bowles 611, Sobczak, and Onishi, Nguyen 685 further discloses determining that at least one of the captured first image or one or more of the captured second images is not a processable image (i.e., Nguyen 685 discloses that an app on the device takes photos “when the electronic device is correctly positioned” in front of the mirror, which inherently includes initially capturing images of an incorrectly positioned device and determining that the images are not processable; paragraphs [0024]-[0025]); and allowing the first device to be reoriented to capture a processable image by:
determining an orientation of the first device based on the captured first image or one or more of the captured second images (i.e., the device determines whether the device is oriented in a correct way or not by capturing the image of its reflection; paragraphs [0024]-[0026]; Figures 2A-B); and
providing guidance to adjust an orientation of the first device based on the determined orientation (i.e., Nguyen 685 discloses “instructing the user on correct positioning” and that the user can be instructed to manually take photos “when the electronic device is correctly positioned”; paragraph [0025]).
Regarding claim 7, in the method taught by Nguyen 685 in view of Bowles 611, Sobczak, and Onishi (including a captured first image with identification information as taught by Bowles 611), Nguyen 685 further discloses tagging the captured second image with identification information (see Nguyen 685, paragraph [0029]). In the method taught by Nguyen 685 in view of Bowles 611, Sobczak, and Onishi, this tagging is tagging “with at least a portion of the captured first image,” since the identification information comes from the first image as taught by Bowles 611.
Regarding claim 8, in the method taught by Nguyen 685 in view of Bowles 611, Sobczak, and Onishi, Nguyen 685 discloses restricting one or more processes of the device at least in the sense that Nguyen 685 discloses disabling the usual display of the front camera view on the device screen (Nguyen 685, paragraph [0026]).
Regarding claim 9, in the method taught by Nguyen 685 in view of Bowles 611, Sobczak, and Onishi, Nguyen 685 discloses identifying the screen or portion thereof of the electronic device in the second image as discussed above with regard to claim 1 but does not specifically disclose corner or edge detection. However, again, Sobczak teaches a method that is related to the method disclosed by Nguyen 685, including identifying a condition of an object (e.g., a belt) based on capturing and analyzing an image of that object (Sobczak, Abstract and paragraph [0002]; Figures 4 and 6). Sobczak further teaches using edge detection to identify the object within the image (Sobczak, Figure 7; paragraphs [0086]-[0094]). It would have been obvious to a person of ordinary skill in the art to use edge detection as taught by Sobczak in the method taught by Nguyen 685 in view of Bowles 611, Sobczak, and Onishi in order to effectively identify the device screen within the image to perform analysis of its condition.
Regarding claim 10, Nguyen 685 in view of Bowles 611, Sobczak, and Onishi teach a method as discussed above with regard to claim 1, including generating a third/cropped image as taught by Sobczak. Sobczak further teaches that generating the cropped image comprises altering the initial image such that all portions of the initial image that are not identified as the desired object are removed (Sobczak, paragraph [0078]). It would have been obvious to a person of ordinary skill in the art to generate a third (i.e., cropped) image by altering the second image such that portions of the second image that are not identified as the screen or the active portion thereof are removed as further taught by Sobczak in the method taught by Nguyen 685 in view of Bowles 611, Sobczak, and Onishi in order to advantageously remove extraneous parts of the image and further facilitate the analysis of the device screen within the image.
Regarding claim 11, in the method taught by Nguyen 685 in view of Bowles 611, Sobczak, and Onishi, Nguyen 685 discloses that identifying the screen or portion thereof comprises identifying the active area of the screen of the electronic device (e.g., the screen could display solid white to highlight the screen in the image of the device and “The further analysis step changes the visual parameters of the photo until the glowing device screen is visually distinct from the rest of the device”; paragraphs [0027] and [0034]).
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 20-22, 24, and 42 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 9 of U.S. Patent No. 10,810,732 B2. Although the claims at issue are not identical, they are not patentably distinct from each other. More specifically, reissue claims 20-22, 24, and 42 essentially recite a subset of the limitations recited in claim 9 of US 10,810,732 B2 (which includes all of the limitations of parent claim 1 of US 10,810,732 B2), including causing display of a first graphic comprising an identification code; capturing a first image comprising a reflection of the first graphic via a camera on the first side of the electronic device; causing display of one or more second graphics; capturing a second image comprising a reflection of at least a portion of a second graphic via a camera on the first side of the electronic device; generating a third image corresponding to the identified screen or portion thereof by restricting one or more portions of the second image that are not identified as the screen or portion thereof from inclusion in the third image; and processing the second image to determine a condition of at least a portion of the screen of the electronic device. Given claim 9 of US 10,810,732 B2, it would have been obvious to create reissue claims 20-22, 24, and 42 by simply omitting limitations (and further in the case of reissue claim 42, providing instructions executed by a processor to perform the recited steps).
Claims 20, 22, 24, 25, and 36 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 20, 23, 24, 26, and 29 of US RE50,340 E, respectively, in view of Sobczak.
Reissue claim 20 recites a method to identify a condition of a screen of an electronic device with similar steps as claim 20 of US RE50,340 E, including causing display of one or more first graphics on a screen of an electronic device, wherein the screen is on a first side of the electronic device; capturing a first image of at least a portion of a first graphic of the one or more first graphics via a camera on the first side of the electronic device; and processing the first image to determine a condition of at least a portion of the screen of the electronic device. Reissue claim 20 differs from claim 20 of US RE50,340 E in that claim 20 of US RE50,340 E further recites a set of burst images, which would have been obvious to simply omit to create reissue claim 20. Reissue claim 20 also differs from claim 20 of US RE50,340 E in that reissue claim 20 further recites generating a second image corresponding to the identified screen or the active portion thereof by restricting one or more portions of the first image that are not identified as the screen or the active portion thereof from inclusion in the second image, which is not recited in claim 20 of US RE50,340 E. However, Sobczak teaches a method including identifying a condition of an object (e.g., a belt) based on capturing and analyzing an image of that object (Sobczak, Abstract and paragraph [0002]; Figures 4 and 6). Sobczak further teaches generating a cropped image corresponding to the identified belt by restricting one or more portions of the initial image that are not identified as the belt from inclusion in the cropped image (Sobczak, paragraph [0078]). 
Given claim 20 of US RE50,340 E, it would have been obvious to a person of ordinary skill in the art to create reissue claim 20 by further generating a second (i.e., cropped) image corresponding to the identified screen or the active portion thereof by restricting one or more portions of the first image that are not identified as the screen or the active portion thereof from inclusion, as taught by Sobczak, in order to advantageously remove extraneous parts of the image and further facilitate the analysis of the device screen within the image.
Reissue claims 22, 24, 25, and 36 depend on claim 20 and further recite limitations that correspond to the limitations recited in claims 23, 24, 26, and 29 of US RE50,340 E, respectively. Given claims 23, 24, 26, and 29 of US RE50,340 E in view of Sobczak, it also would have been obvious to create reissue claims 22, 24, 25, and 36 for the same reasons as reissue claim 20.
Conclusion
Applicant is reminded of the continuing obligation under 37 CFR 1.178(b), to timely apprise the Office of any prior or concurrent proceeding in which this reissue application is or was involved. These proceedings would include interferences, reissues, reexaminations, and litigation. Applicant is further reminded of the continuing obligation under 37 CFR 1.56, to timely apprise the Office of any information which is material to patentability of the claims under consideration in this reissue application. These obligations rest with each individual associated with the filing and prosecution of this application for reissue. See also MPEP §§ 1404, 1442.01 and 1442.04.
Applicant is notified that any subsequent amendment to the specification and/or claims must comply with 37 CFR 1.173(b).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at https://www.uspto.gov/patents/laws/interview-practice.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Any inquiry concerning this communication or earlier communications from the examiner, or as to the status of this proceeding, should be directed to Examiner Christina Leung at telephone number (571) 272-3023; the Examiner’s supervisor, SPE Patricia Engle at (571) 272-6660; or the Central Reexamination Unit at (571) 272-7705.
/CHRISTINA Y. LEUNG/Primary Examiner, Art Unit 3991
Conferees:
/DEANDRA M HUGHES/Reexamination Specialist, Art Unit 3992
/Patricia L Engle/SPRS, Art Unit 3991