DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of claims: Claims 1-20 are examined below.
Double Patenting
A rejection based on double patenting of the “same invention” type finds its support in the language of 35 U.S.C. 101 which states that “whoever invents or discovers any new and useful process... may obtain a patent therefor...” (Emphasis added). Thus, the term “same invention,” in this context, means an invention drawn to identical subject matter. See Miller v. Eagle Mfg. Co., 151 U.S. 186 (1894); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Ockert, 245 F.2d 467, 114 USPQ 330 (CCPA 1957).
A statutory type (35 U.S.C. 101) double patenting rejection can be overcome by canceling or amending the claims that are directed to the same invention so they are no longer coextensive in scope. The filing of a terminal disclaimer cannot overcome a double patenting rejection based upon 35 U.S.C. 101.
Claims 1-20 are provisionally rejected under 35 U.S.C. 101 as claiming the same invention as that of claims 1-20 of copending Application No. 18/623890 (reference application). This is a provisional statutory double patenting rejection since the claims directed to the same invention have not in fact been patented.
Claims 1-20 of the instant application are identical in scope to claims 1-20 of the copending application.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 16-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Weiss et al (US 2017/0206428) in view of Roberts et al (US 2022/0241069).
Claim 1:
Weiss et al (US 2017/0206428) teaches the following subject matter:
A method for automatically measuring features across multiple assembly units (0014 teaches multiple optical inspection stations arranged along an assembly line that automatically inspect assembly units) comprising:
accessing a dimension library containing a set of feature templates associated with geometric characteristics of predefined features in recorded inspection images of assembly units (0040 teaches a database (library), remote or local, with a reference image (predefined feature) mapped to a digital photographic image and related to real dimensions (i.e., length, distance, etc., values in real space));
accessing a first inspection image of a first assembly unit (0040 teaches a first digital photographic image generated from a first image);
prior to presentation of the first inspection image to a user (0052 teaches processing prior to presentation of the first image to the user; 0040 teaches displaying the first digital photographic image within a user interface or presentation):
predicting a first set of key features in the first inspection image based on the set of feature templates contained in the dimension library (0116 teaches the system can predict length-type measurements, total area measurements, distance, angle, gap profile, etc., of a particular feature); and
extracting a first set of real dimensions of the first set of key features from the first inspection image (0040 teaches the system can also extract a real dimension of a feature of the first assembly unit from the first image);
presenting the first inspection image to the user via a user portal (0040 teaches displaying the first digital photographic image within a user interface or presentation);
projecting the first set of real dimensions proximal the first set of key features onto the first inspection image at the user portal (0050 teaches projecting the first and second of two images in a view window within the user interface, enabling the user to quickly, visually distinguish differences);
receiving a first subset of key features, in the first set of key features, from the user at the user portal (0040 teaches a user interface; 0050 further details enabling the user to quickly, visually distinguish difference inputs; 0027, 0136, 0146 teach user selection of a subset per the measurement specification (key features));
accessing a second inspection image of a second assembly unit (figure 1 and 0013 teach a second image of a second assembly unit; see also figure 3 and 0032-0033);
prior to presentation of the second inspection image to the user (0052 and 0077 detail processing the view area from one image to another (the second inspection image) prior to serving the image to the user interface for viewing):
predicting a second set of key features in the second inspection image based on the set of feature templates contained in the dimension library and the first subset of key features selected by the user, the second set of key features comprising (0127 teaches predicting second features relative to first features, which are prompted to the user through the user interface):
the first subset of key features by the user (0040 teaches a user interface; 0050 further details enabling the user to quickly, visually distinguish difference inputs); and
a second subset of key features distinct from unconfirmed features in the first set of key features (0112 teaches the system can implement similar methods and techniques to receive selections of multiple distinct features (corner, edge, surface) from the first image); and
extracting a second set of real dimensions of the second set of key features from the second inspection image (0102, 0121, and 0129 teach extracting real dimensions from the second image of the second assembly unit to the user interface);
presenting the second inspection image to the user via the user portal (0102 teaches extracting real dimensions of the second image of the second assembly unit to the user interface, Block S320; 0121 and 0129 teach the same with further details); and
projecting the second set of real dimensions proximal the second set of key features onto the second inspection image at the user portal (0102, 0121, 0129 and claim 3 teach extracting a real dimension of a feature of an assembly unit from an image, for each image in the set of images, comprising: projecting the dimension space onto the second image; and extracting a second real dimension of the second feature from the second image).
Weiss et al teaches all of the subject matter above.
Weiss et al teaches in 0040 and 0050 images presented to the user for selection, but does not teach the following:
receiving confirmation; confirmed by the user.
Roberts et al (US 2022/0241069) teaches the following subject matter:
receiving confirmation; confirmed by the user (0096 teaches operator-assisted (user) confirmation of the assembly sequence during inspection across different assemblies).
Weiss et al and Roberts et al are both in the field of image analysis, especially inspection of assemblies using a database (library) for reference, such that the combined outcome is predictable.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Weiss et al in view of Roberts et al to include operator confirmation so that assembly is performed correctly, as disclosed by Roberts et al in 0096.
Claim 16:
Weiss et al (US 2017/0206428) teaches the following subject matter:
A method for automatically measuring features across multiple assembly units comprising (0014 teaches multiple optical inspection stations arranged along an assembly line that automatically inspect assembly units):
accessing a dimension library containing a set of feature templates associated with geometric characteristics of predefined features in recorded inspection images of assembly units (0040 teaches a database (library), remote or local, with a reference image (predefined feature) mapped to a digital photographic image and related to real dimensions (i.e., length, distance, etc., values in real space));
during a first time period: accessing a first inspection image of a first assembly unit (0040 teaches a first digital photographic image generated from a first image);
predicting a first set of key features in the first inspection image based on the set of feature templates contained in the dimension library (0116 teaches the system can predict length-type measurements, total area measurements, distance, angle, gap profile, etc., of a particular feature);
extracting a first set of real dimensions of the first set of key features from the first inspection image (0040 teaches the system can also extract a real dimension of a feature of the first assembly unit from the first image);
presenting the first inspection image to a user via a user portal (0040 teaches displaying the first digital photographic image within a user interface or presentation);
projecting the first set of real dimensions proximal the first set of key features onto the first inspection image at the user portal (0050 teaches projecting the first and second of two images in a view window within the user interface, enabling the user to quickly, visually distinguish differences); and
receiving a first subset of key features, in the first set of key features, from the user at the user portal (0040 teaches a user interface; 0050 further details enabling the user to quickly, visually distinguish difference inputs; 0027, 0136, 0146 teach user selection of a subset per the measurement specification (key features)); and
during a second time period following the first time period: accessing a second inspection image of a second assembly unit (figure 1 and 0013 teach a second image of a second assembly unit; see also figure 3 and 0032-0033);
based on receipt of the first subset of key features from the user, identifying the first subset of key features in the second inspection image (0052 and 0077 detail processing the view area from one image to another (the second inspection image) prior to serving the image to the user interface for viewing);
predicting a second set of key features in the second inspection image based on the set of feature templates contained in the dimension library (0127 teaches predicting second features relative to first features, which are prompted to the user through the user interface);
extracting a second set of real dimensions of the first subset of key features and the second set of key features from the second inspection image (0102, 0121, and 0129 teach extracting real dimensions from the second image of the second assembly unit to the user interface);
presenting the second inspection image at the user portal (0102 teaches extracting real dimensions of the second image of the second assembly unit to the user interface, Block S320; 0121 and 0129 teach the same with further details); and
projecting the second set of real dimensions proximal the second set of key features onto the second inspection image at the user portal (0102, 0121, 0129 and claim 3 teach extracting a real dimension of a feature of an assembly unit from an image, for each image in the set of images, comprising: projecting the dimension space onto the second image; and extracting a second real dimension of the second feature from the second image).
Weiss et al teaches all of the subject matter above.
Weiss et al teaches in 0040 and 0050, above, images presented to the user, but does not teach the following:
receiving confirmation; confirmed by the user.
Roberts et al (US 2022/0241069) teaches the following subject matter:
receiving confirmation; confirmed by the user (0096 teaches operator-assisted (user) confirmation of the assembly sequence during inspection across different assemblies).
Weiss et al and Roberts et al are both in the field of image analysis, especially inspection of assemblies using a database (library) for reference, such that the combined outcome is predictable.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Weiss et al in view of Roberts et al to include operator confirmation so that assembly is performed correctly, as disclosed by Roberts et al in 0096.
Claim 17:
Weiss et al teaches:
The method of Claim 16, wherein predicting the second set of key features based on the set of feature templates contained in the dimension library comprises predicting the second set of key features distinct from unconfirmed features in the first set of key features (0127 teaches predicting second features relative to first features, which are prompted to the user through the user interface).
Claim 20:
Weiss et al (US 2017/0206428) teaches the following subject matter:
A method for automatically measuring features across multiple assembly units comprising (0014 teaches multiple optical inspection stations arranged along an assembly line that automatically inspect assembly units):
accessing a dimension library containing a set of feature templates associated with geometric characteristics of predefined features in recorded inspection images of assembly units (0040 teaches a database (library), remote or local, with a reference image (predefined feature) mapped to a digital photographic image and related to real dimensions (i.e., length, distance, etc., values in real space));
accessing a first inspection image of a first assembly unit (0040 teaches a first digital photographic image generated from a first image);
prior to presentation of the first inspection image to a user (0052 teaches processing prior to presentation of the first image to the user; 0040 teaches displaying the first digital photographic image within a user interface or presentation):
predicting a first set of key features in the first inspection image based on the set of feature templates contained in the dimension library (0116 teaches the system can predict length-type measurements, total area measurements, distance, angle, gap profile, etc., of a particular feature); and
extracting a first set of real dimensions of the first set of key features from the first inspection image (0040 teaches the system can also extract a real dimension of a feature of the first assembly unit from the first image);
presenting the first inspection image to a user via a user portal (0040 teaches displaying the first digital photographic image within a user interface or presentation);
projecting the first set of real dimensions proximal the first set of key features onto the first inspection image at the user portal (0050 teaches projecting the first and second of two images in a view window within the user interface, enabling the user to quickly, visually distinguish differences);
receiving a first subset of key features, in the first set of key features, from the user at the user portal (0040 teaches a user interface; 0050 further details enabling the user to quickly, visually distinguish difference inputs; 0027, 0136, 0146 teach user selection of a subset per the measurement specification (key features)); and
in response to receipt of confirmation of the first subset of key features from the user: predicting a second set of key features, distinct from unconfirmed features in the first set of key features, in the first inspection image based on the set of feature templates contained in the dimension library (0127 teaches predicting second features relative to first features, which are prompted to the user through the user interface);
extracting a second set of real dimensions of the first subset of key features and the second set of key features from the first inspection image (0102, 0121, and 0129 teach extracting real dimensions from the second image of the second assembly unit to the user interface); and
projecting the second set of real dimensions proximal the second set of key features onto the first inspection image at the user portal (0102, 0121, 0129 and claim 3 teach extracting a real dimension of a feature of an assembly unit from an image, for each image in the set of images, comprising: projecting the dimension space onto the second image; and extracting a second real dimension of the second feature from the second image).
Weiss et al teaches all of the subject matter above.
Weiss et al teaches in 0040 and 0050, above, images presented to the user, but does not teach the following:
receiving confirmation; confirmed by the user.
Roberts et al (US 2022/0241069) teaches the following subject matter:
receiving confirmation; confirmed by the user (0096 teaches operator-assisted (user) confirmation of the assembly sequence during inspection across different assemblies).
Weiss et al and Roberts et al are both in the field of image analysis, especially inspection of assemblies using a database (library) for reference, such that the combined outcome is predictable.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Weiss et al in view of Roberts et al to include operator confirmation so that assembly is performed correctly, as disclosed by Roberts et al in 0096.
Allowable Subject Matter
Claims 2-15 and 18-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, upon overcoming the statutory double patenting rejection set forth above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Weiss et al (US 2017/0206643), "Methods for Automatically Generating a Common Measurement Across Multiple Assembly Units," teaches in the Abstract: a first image of a first assembly unit within a user interface; locating a first virtual origin at a first feature on the first assembly unit; displaying a first subregion of the first image within the user interface responsive to a change in a view window of the first image; recording a geometry and a position of the first subregion relative to the first virtual origin; locating a second virtual origin at a second feature—analogous to the first feature—on a second assembly unit represented in the second image; projecting the geometry and the position of the first subregion onto the second image according to the second virtual origin to define a second subregion of the second image; and, in response to receipt of a command to advance from the first image to the second image, displaying the second subregion within the user interface.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TSUNG-YIN TSAI whose telephone number is (571)270-1671. The examiner can normally be reached 7am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bhavesh Mehta can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TSUNG YIN TSAI/Primary Examiner, Art Unit 2656