Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4 and 6-10 are rejected under 35 U.S.C. 103 as being unpatentable over Watanabe et al. (JP 2020135450) in view of Da Silva et al. (US 2017/0322119).
Regarding Claim 1. Watanabe teaches A data display system comprising:
at least one memory storing computer-executable instructions; and
at least one processor configured to access the at least one memory and
execute the computer-executable instructions to (Watanabe, abstract, the invention describes a system, a method, and a program for efficiently inspecting a structure such as a bridge easily, quickly, accurately, and at low cost. The present invention includes a photographing device 10 including a camera, at least three laser pointers, and a fixing means for integrally holding them, and a data processing device 20. The system calculates distance data from the camera to the structure 2 and tilt data in the three-dimensional direction of the camera, and combines and integrates multiple partial image data, based on the magnification of the partial image data and the detailed image data in each of the two wide and narrow image data sets, to unify the scale. By combining and integrating a plurality of coarse wide-area front image data of the structure 2 and specific partial detailed image data, the image processing system 1, image processing method, and program can obtain partially detailed wide-area front image data of the structure 2 and thereby efficiently inspect a structure 2 such as a bridge.
Page 3, par 7, (5) In the image processing system for efficiently inspecting a structure such as a bridge of the present invention, the data processing device is composed of a host computer and a local terminal device connected to the host computer.
It is considered inherent that a computer comprises a processor and memory storage.):
estimate a photographing position pose of image data in a coordinate system of three-dimensional data (Watanabe, page 3, par 5, The distance data from the camera to the structure and the tilt data in the three-dimensional direction of the camera are calculated from the at least three irradiation points made by the laser pointers in the detailed image data and the partial image data in each of the two wide and narrow image data sets. By combining and integrating a plurality of partial image data based on the magnification of the partial image data and the detailed image data, it is possible to obtain coarse wide-area front image data of a structure having a unified scale.
Page 4, par 3, (1) Since the present invention configured as described above uses an imaging device in which the laser beam from each laser pointer and the optical axis of the camera are fixed, the distance data from the camera to the structure and the tilt data of the camera in the three-dimensional direction can be calculated from the relative relationship between the coordinate positions of the irradiation points made by the laser pointers in image data taken with a positional relationship to the target that is specified in advance.);
detect a position of a predetermined matter from the image data (Watanabe, page 8, par 4, Further, in a preferred embodiment of the present invention, the data processing apparatus 20 can perform a measurement process on damage such as cracks 5 or peeling occurring in the structure 2, from the wide-range front image of the structure 2 obtained by the image integration process. In such a measurement process, the coordinate position of the damage (the distance between irradiation points P1 to P4) in each image data is determined from the distance data between the camera 11 and the object, the inclination data of the camera 11, and the resolution of the image data.); and
Watanabe fails to explicitly teach, however, Da Silva teaches display a position of the matter on diagram data in a superposed manner, based on the photographing position pose and the position of the predetermined matter (Da Silva, abstract, the invention describes an inspection system for assessing and visualizing structural damage to a structural platform, comprising sensors operatively coupled to the structural platform that assess structural damage to or failure of the structural platform. A structural health monitoring processor operatively coupled to the sensors determines structural damage in response to the sensors. At least one RF transponder and associated reader determines the position of the structural platform relative to a user augmented reality viewing device. The user augmented reality viewing device includes a camera that captures images of the structural platform. The user augmented reality viewing device displays real world images of the structural platform and virtual indications of determined structural damage that are dependent on the position and orientation of the user augmented reality viewing device relative to the structural platform.
[0053] In FIG. 1, a user viewing an AR system (which in this case is a handheld tablet, smartphone or other viewing device including a camera, RFID or other transponder, and gyro sensor) can see both the "real view" of a structure and, superimposed on it, a virtual structural failure to provide an augmented reality view. The user may thus scan the structure to inspect it and look for defects. Upon the system detecting (using the camera, gyro sensor and/or transponder) from the position and orientation of the AR viewing system in 3D space that the user is looking at a portion of the structure the SHM system has determined is damaged, the system may superimpose a virtual indication of structural damage or failure at an appropriate position on a display of the AR system. Color-coding, size, symbology and the like may be used to indicate to the user the type of damage, the severity of the damage, whether the damage is on the surface or deep within the structure, etc.).
Watanabe and Da Silva are analogous art because they both teach a method of detecting damage on a building structure by image analysis. Da Silva further teaches superimposing damage data on an image of the building structure. Therefore, it would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to modify the building damage detection method (taught in Watanabe) to further superimpose damage data on an image of the building structure (taught in Da Silva), so as to provide an intuitive method for the user to view the structural damage (Da Silva, [0004]).
Regarding Claim 2. The combination of Watanabe and Da Silva further teaches The data display system according to claim 1, wherein the at least one processor is further configured to execute the instructions to display the image data and the three-dimensional data in a superposed manner, based on the photographing position pose (Da Silva, [0053], quoted in the rejection of claim 1 above: the AR viewing device superimposes a virtual indication of structural damage at an appropriate position on the display, based on the position and orientation of the AR viewing system in 3D space.
Further see Fig. 3D.).
The reasoning for the combination of Watanabe and Da Silva is the same as that described for Claim 1.
Regarding Claim 3. The combination of Watanabe and Da Silva further teaches The data display system according to claim 2, wherein a photographing range of the image data on a target object indicated by the three-dimensional data is presented by displaying the image data and the three-dimensional data in a superposed manner (Da Silva, [0077] If there is structural damage, the SHM system performs other algorithms that calculate the location of this damage in relation to an origin point previously defined (e.g., Cartesian coordinates: x, y and z equals zero) and determine an estimate of the size, severity or other characteristics of this structural damage.
[0078] The results are sent from the application server to the AR System and can be transmitted either by wire or by wireless signal, through Bluetooth, a Wi-Fi network, or other options (see FIG. 2D).
[0092] Once the transponder system identifies both the AR device with the operator and a nearby SHM sensors network, it will initiate an automatic integrity evaluation of the correspondent structure as well as indicate to the AR System the user and SHM locations. If there is any evidence of damage or degradation in the structure, the results of the SHM system will allow the AR device to generate a virtual representation of the damage and its severity (see FIG. 3D).).
The reasoning for the combination of Watanabe and Da Silva is the same as that described for Claim 1.
Regarding Claim 4. The combination of Watanabe and Da Silva further teaches The data display system according to claim 2, wherein a display method is changed from a state of displaying only the three-dimensional data to a state of displaying the image data and the three-dimensional data in a superposed manner (Da Silva, [0053], quoted in the rejection of claim 1 above.
Therefore, when the system detects that the user is looking at a portion of the structure with damage, it switches from displaying only the structure to displaying the structure with superimposed damage data.).
The reasoning for the combination of Watanabe and Da Silva is the same as that described for Claim 1.
Regarding Claim 6. The combination of Watanabe and Da Silva further teaches The data display system according to claim 2, wherein the image data and the three-dimensional data are displayed in a superposed manner by coloring a part of the three-dimensional data, being included in the photographing range of the image data, based on the image data (Da Silva, [0053], quoted in the rejection of claim 1 above: color-coding, size, symbology and the like may be used to indicate to the user the type of damage, the severity of the damage, and whether the damage is on the surface or deep within the structure.).
The reasoning for the combination of Watanabe and Da Silva is the same as that described for Claim 1.
Regarding Claim 7. The combination of Watanabe and Da Silva further teaches The data display system according to claim 2, wherein the image data and the three-dimensional data are displayed in a superposed manner by generating a member image acquired by drawing the three-dimensional data from a predetermined viewpoint and deforming and superposing the image data on the member image (Da Silva, [0053], quoted in the rejection of claim 1 above.
A superimposed label can be displayed in a perspective view, which is commonly used to create the illusion of depth and three-dimensionality; a perspective view displays an item at a predetermined angle with a deformed shape. Displaying superimposed image data in a perspective view provides the user with a more realistic view when inspecting building damage.).
The reasoning for the combination of Watanabe and Da Silva is the same as that described for Claim 1.
Claim 8 is similar in scope to Claim 1, and thus is rejected under the same rationale.
Claim 9 is similar in scope to Claim 1, and thus is rejected under the same rationale.
Regarding Claim 10. The combination of Watanabe and Da Silva further teaches A non-transitory computer-readable storage medium storing a program for causing a computer to execute the computer-implemented data display method according to claim 9 (Watanabe, Page 3, par 7, (5) In the image processing system for efficiently inspecting a structure such as a bridge of the present invention, the data processing device is composed of a host computer and a local terminal device connected to the host computer.
It is considered inherent that a computer comprises a processor and memory storage.).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Watanabe et al. (JP 2020135450) in view of Da Silva et al. (US 2017/0322119), and further in view of Kaneko (CN 114175019).
Regarding Claim 5. The combination of Watanabe and Da Silva fails to explicitly teach, however, Kaneko teaches The data display system according to claim 2, wherein the image data and the three-dimensional data are displayed in a superposed manner by arranging the image data as a three-dimensional object on the coordinate system and simultaneously displaying the three-dimensional object being arranged and the three-dimensional data (Kaneko, abstract, the invention describes a device, method, and program capable of displaying an optimal captured image among a plurality of captured images that include the pixel of a desired three-dimensional point of a subject. The desired position of the subject can be easily specified by displaying the three-dimensional model of the subject on the display unit and performing view operations such as magnification of the three-dimensional model. When the desired position of the subject is specified, the three-dimensional position on the three-dimensional model corresponding to that position is determined, and a group of captured images obtained by photographing the subject is searched for a plurality of captured images containing the pixel corresponding to the determined three-dimensional position. The optimal captured image is determined from the plurality of searched captured images, or a priority order of the plurality of captured images is determined, and either the determined optimal captured image, or part or all of the plurality of captured images in the determined priority order, is displayed on the display unit.
Page 10, par 6, Fig. 9 is a diagram showing an example of an orthographic image in which a damage map corresponding to a bank is superimposed.
Page 14, par 11, When the extensive three-dimensional model 16B is displayed, a high-luminance point, blinking of a high-luminance point, or the like is preferable as a mark indicating the position of the photographed image 100 superimposed and displayed on the three-dimensional model 16B.).
Watanabe, Da Silva, and Kaneko are analogous art because they all teach methods of visualizing damage on a building structure by image analysis. Kaneko further teaches superimposing damage data on the image of the 3D building structure object. Therefore, it would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention to modify the building damage detection method (taught in Watanabe and Da Silva) to further superimpose damage data on the image of the 3D building structure object (taught in Kaneko), so as to provide an intuitive method for the user to selectively bring up an image based on a position designated by the user (Kaneko, page 5, par 8-11).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIN SHENG, whose telephone number is (571) 272-5734. The examiner can normally be reached M-F 9:30AM-3:30PM and 6:00PM-8:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan, can be reached at 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Xin Sheng/ Primary Examiner, Art Unit 2619