DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/26/2026 has been entered.
Information Disclosure Statement
3. The information disclosure statement (IDS) submitted on 02/02/2026 is in compliance with the provisions of 37 CFR 1.97 and is being considered by the Examiner.
Response to Amendment
4. Applicant’s amendments filed on 02/26/2026 have been entered. Claims 16-18 have been added. Claims 1-18 are pending in this application, with claims 1 and 10-15 being independent.
Claim Rejections - 35 USC § 103
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 1-2, 6 and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Liao et al. (“Liao”) [US-2022/0044352-A1] in view of Urushiya (“Urushiya”) [US-2007/0195091-A1].
Regarding claim 1, Liao discloses an image processing method (Liao- ¶0003, at least discloses a first learning network for geometric deformation from a first image domain to a second image domain is determined based on a first image in the first image domain and a second image in the second image domain, images in the first and second image domains having different styles and objects in the images having geometric deformation with respect to each other.), comprising:
acquiring a second image obtained by applying a geometric transformation to a first image (Liao- Fig. 2B and ¶0062, at least disclose after the first learning network 210 performs the geometric deformation on the source image 102 to deform the first geometry 202 of the object in the source image 102 [applying a geometric transformation to a first image] to the second geometry 204, the second geometry 204 and the source image 102 are input together to the merging module 240. The merging module 240 transforms the source image 102 based on the second geometry 204 to generate an intermediate image 242 [acquiring a second image obtained]. The merging module 240 performs image warping on the source image 102 under the guidance of the deformed second geometry 204, such that the object in the generated intermediate image 104 has a same or similar geometry as the second geometry 202. Since the warping is performed directly on the source image 102 [first image], the intermediate image 242 [second image] maintains the same first style of the source image 102 (e.g., the real photo style)), wherein in the geometric transformation, point in the second image is mapped from point in the first image (Liao- ¶0065, at least discloses using the landmark points to represent the geometry, it is assumed that LX and LY are the domains of landmark points in the photo domain (X) and the caricature domain (Y), respectively. The first learning network 210 is to be trained to learn the mapping Φgeo: LX→LY for geometric deformation, such that deformed landmark points ly∈LY in the domain Y are generated for the landmark point lx of the photo x in the domain X; ¶0068, at least discloses In the landmark point-based geometry representation, the landmark points are marked on the first image and the second image both for training. Therefore, the landmark points may be extracted from these images for training. In order to collect the geometry of all possible objects, a similar translation may be utilized to align the first image and the second image for training to an average shape of the objects through several landmark points (e.g., three landmark points on the human face, including centers of both eyes and a center of the mouth); ¶0120, at least discloses extracting first landmark points of a geometry of an object in the first image and second landmark points of a geometry of an object in the second image; determining a first principal component analysis (PCA) representation of the first landmark points and a second PCA representation of the second landmark points; ¶0136, at least discloses performing the geometric deformation comprises: determining landmark points in the source image that represent the first geometry; generating a principal component analysis (PCA) representation of the landmark points […] and determining deformed landmark points representing the second geometry based on the deformed PCA representation);
acquiring information about a deformation amount of the first image in the geometric transformation (Liao- ¶0052, at least discloses the first learning network 210 may also perform the geometric deformation based on a degree of deformation indicated by the user to deform the first geometry of the object in the source image 102 to the second geometry. The degree of deformation may be indicated by the user […] through a user adjustable parameter may be set to indicate the degree of deformation [information about the deformation amount]. The second learning network 210 may determine a deformation of the second geometry relative to the first geometry based on the degree of deformation. For example, if the first learning network 210 is to magnify or diminish a part of the first geometry, the degree of deformation may control the extent to which the part is magnified or diminished; ¶0137, at least discloses obtaining an indication of a deformation degree of the object; and transferring the first geometry to the second geometry based on the deformation degree); and
generating a third image based on the second image and the information about the deformation amount (Liao- ¶0052, at least discloses the first learning network 210 may also perform the geometric deformation based on a degree of deformation [information about the deformation amount] indicated by the user to deform the first geometry of the object in the source image 102 to the second geometry. The degree of deformation may be indicated by the user […] through a user adjustable parameter may be set to indicate the degree of deformation. The second learning network 210 may determine a deformation of the second geometry relative to the first geometry based on the degree of deformation [information about the deformation amount]; Fig. 2B and ¶0062-0063, at least disclose after the first learning network 210 performs the geometric deformation on the source image 102 to deform the first geometry 202 of the object in the source image 102 to the second geometry 204, the second geometry 204 and the source image 102 are input together to the merging module 240. The merging module 240 transforms the source image 102 based on the second geometry 204 to generate an intermediate image 242 […] The intermediate image 242 [the second image] is input to the second learning network 220 to perform the style transfer to generate the target image 104 [generating a third image]).
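For illustration only, and not as part of the claim mapping: the two-stage pipeline described in the Liao passages cited above (geometric deformation scaled by a user-indicated degree, warping to produce an intermediate image, then style transfer) can be sketched as follows. All function names below are hypothetical stand-ins, not Liao’s actual implementation.

    # Illustrative sketch only; all helper callables are hypothetical.
    import numpy as np

    def two_stage_translate(source, detect_landmarks, geometry_net, warp,
                            style_net, degree=1.0):
        """Sketch of the pipeline in Liao Fig. 2B, ¶0052, ¶0062-0063.

        detect_landmarks(img) -> (N, 2) landmark array (first geometry)
        geometry_net(pts)     -> (N, 2) deformed landmarks (second geometry)
        warp(img, src, dst)   -> image warped so that src points move to dst
        style_net(img)        -> style-transferred image
        degree                -> user-adjustable deformation amount
        """
        lx = np.asarray(detect_landmarks(source), dtype=float)
        ly = np.asarray(geometry_net(lx), dtype=float)
        ly = lx + degree * (ly - lx)         # scale deformation by the degree
        intermediate = warp(source, lx, ly)  # the "second image"
        return style_net(intermediate)       # the "third image"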
Liao does not explicitly disclose wherein in the geometric transformation, each point in the second image is mapped from a respective point in the first image.
However, Urushiya discloses
in the geometric transformation, each point in the second image is mapped from respective point in the first image (Urushiya- ¶0017, at least discloses a body movement correction unit adapted to execute a correction of a body movement by executing geometric transformation to the plural projected images of which the projected angles of the radiation are different, by using the respective changed geometric transformation parameters; Fig. 4 shows each point in the second image is mapped from respective point in the first image; ¶0046-0048, at least disclose the coordinates of the corresponding points (the small black square points shown in FIG. 4) between a projected image 401 (for example, an image at scan angle 0°) and a projected image 402 (for example, an image at scan angle 360°) of which the respective projected angles overlap each other are acquired […] the sets of the coordinates of the respective corresponding points of the projected images 401 and 402 are acquired as much as the number of corresponding points. To achieve this, first, plural fixed points are set on one (e.g., projected image 401) of the two projected images […] if the plural fixed points are set with respect to one of the two projected images, the coordinates, on the other (e.g., projected image 402) of the two projected images, of the points respectively corresponding to these fixed points are acquired; when the coordinates of the corresponding points between the projected images 401 and 402 of which the projected angles overlap each other are acquired in the step S301, the geometric transformation parameter is acquired from the set of the coordinates of the corresponding points (step S302). For example, affine transformation may be used in such geometric transformation; ¶0055, at least discloses when the coordinates of the corresponding points between the projected images 401 and 402 of which the projected angles overlap each other are acquired in the step S301, the geometric transformation parameter is acquired from the set of the coordinates of the corresponding points (step S302). For example, affine transformation may be used in such geometric transformation).
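By way of a hedged illustration of the Urushiya passages quoted above (¶0046-0048, ¶0055): an affine transformation can be estimated by least squares from sets of corresponding points and then maps every point of one image to a respective point of the other. The sketch below is a minimal example of that standard technique, not Urushiya’s implementation.

    import numpy as np

    def fit_affine(src_pts, dst_pts):
        # Solve [x y 1] @ B = [x' y'] in the least-squares sense from
        # corresponding points on the two projected images.
        src = np.asarray(src_pts, dtype=float)
        dst = np.asarray(dst_pts, dtype=float)
        X = np.hstack([src, np.ones((len(src), 1))])
        B, *_ = np.linalg.lstsq(X, dst, rcond=None)
        return B.T  # 2x3 affine matrix A

    def apply_affine(A, pts):
        # Map each point of the first image to its respective point in
        # the second image: p' = A[:, :2] @ p + A[:, 2].
        pts = np.asarray(pts, dtype=float)
        return pts @ A[:, :2].T + A[:, 2]

Given three or more non-collinear correspondences, fit_affine recovers the six affine parameters, and apply_affine then realizes the per-point mapping.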
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liao to incorporate the teachings of Urushiya, and apply the corresponding points between the projected images into Liao’s teachings for acquiring a second image obtained by applying a geometric transformation to a first image, wherein in the geometric transformation, each point in the second image is mapped from a respective point in the first image.
Doing so would correct body movement as accurately as possible, thereby reducing artifacts that appear in the tomographic image.
Regarding claim 2, Liao in view of Urushiya discloses the image processing method according to claim 1, and further discloses wherein the third image is generated by inputting the second image and the information about the deformation amount (see Claim 1 rejection for detailed analysis) to a machine learning model (Liao- Fig. 2B and ¶0062-0063, at least disclose after the first learning network 210 performs the geometric deformation on the source image 102 to deform the first geometry 202 of the object in the source image 102 to the second geometry 204, the second geometry 204 and the source image 102 are input together to the merging module 240 […] The intermediate image 242 is input to the second learning network 220 to perform the style transfer to generate the target image 104).
Regarding claim 6, Liao in view of Urushiya discloses the image processing method according to claim 1, and further discloses wherein the information about the deformation amount (see Claim 1 rejection for detailed analysis) includes a value of the deformation amount at each position of a pixel in the first image (Liao- ¶0052, at least discloses the first learning network 210 may also perform the geometric deformation based on a degree of deformation indicated by the user to deform the first geometry of the object in the source image 102 to the second geometry. The degree of deformation may be indicated by the user […] through a user adjustable parameter may be set to indicate the degree of deformation [information about the deformation amount]. The second learning network 210 may determine a deformation of the second geometry relative to the first geometry based on the degree of deformation; Urushiya- Fig. 8 and ¶0085, at least disclose the point (x, y) on the before-geometric transformation image 801 to be transformed into the point (x′, y′) on the after-geometric transformation image 802 is acquired. Here, it should be noted that the point (x′, y′) indicates the coordinates of the integer value corresponding to each pixel on the after-geometric transformation image 802. Then, by using the inverse transformation of the geometric transformation with respect to the coordinates, the point (x, y) corresponding to the point (x′, y′) can be acquired. However, since the point (x, y) does not have the coordinates of an integer value, interpolation is executed by using the pixel values of the four points closest to the point (x, y), and the interpolated value is set as the pixel value of the point (x′, y′). This process is executed to all the pixels, whereby the image can be created through the geometric transformation).
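As a non-authoritative sketch of the procedure Urushiya describes at Fig. 8 and ¶0085 (inverse-transform each integer output pixel (x′, y′) to a fractional point (x, y) and interpolate from the four closest input pixels), assuming a hypothetical inverse_map callable:

    import numpy as np

    def warp_inverse(image, inverse_map):
        # For each integer output pixel (xp, yp), find the corresponding
        # fractional point (x, y) in the input via the inverse transform,
        # then bilinearly interpolate from the four closest pixels.
        h, w = image.shape[:2]
        out = np.zeros_like(image, dtype=float)
        for yp in range(h):
            for xp in range(w):
                x, y = inverse_map(xp, yp)
                x0, y0 = int(np.floor(x)), int(np.floor(y))
                if 0 <= x0 < w - 1 and 0 <= y0 < h - 1:
                    fx, fy = x - x0, y - y0
                    out[yp, xp] = ((1 - fx) * (1 - fy) * image[y0, x0]
                                   + fx * (1 - fy) * image[y0, x0 + 1]
                                   + (1 - fx) * fy * image[y0 + 1, x0]
                                   + fx * fy * image[y0 + 1, x0 + 1])
        return out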
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liao to incorporate the teachings of Urushiya, and apply the per-pixel positions into Liao’s teachings so that the information about the deformation amount includes a value of the deformation amount at each position of a pixel in the first image.
The same motivation that was utilized in the rejection of claim 1 applies equally to this claim.
Regarding claim 10, all claim limitations are set forth as in claim 1, embodied as a non-transitory computer-readable storage medium that stores computer-executable instructions, and are rejected as discussed for claim 1.
Liao in view of Urushiya further discloses a non-transitory computer-readable storage medium that stores computer-executable instructions that, when executed by a computer (Liao- Fig. 1 and ¶0026, at least disclose The computing device 100 typically includes various computer storage medium. Such medium can be any medium accessible by the computing device 100, including, but not limited to, volatile and non-volatile medium, or detachable and non-detachable medium. The memory 120 can be a volatile memory (for example, a register, cache, Random Access Memory (RAM)), a non-volatile memory (such as a Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), or a flash memory), or any combination thereof. The storage device 130 may be any detachable or non-detachable medium and may include a machine-readable medium such as a memory, flash memory drive, magnetic disk or another other media, which can be used for storing information and/or data and can be accessed in the computing device 100; ¶0031, at least discloses The memory 120 may include one or more image translation modules 122 having one or more program instructions. These modules are accessible and executable by the processing unit 110 to perform the functionalities of the various implementations), cause the computer to perform the method of claim 1.
The image processing apparatus of claim 11 is similar in scope to the functions performed by the method of claim 1 and therefore claim 11 is rejected under the same rationale.
Liao in view of Urushiya further discloses an image processing apparatus (Liao- Fig. 1 and ¶0023-0024, at least disclose the computing device 100 includes a general-purpose computing device 100 […] the computing device 100 may be implemented as any user terminal or server terminal having the computing capability), comprising:
one or more memories (Liao- Fig. 1 and ¶0023, at least disclose Components of the computing device 100 may include, but are not limited to, one or more processors or processing units 110, a memory 120, a storage device 130); and
one or more processors, wherein the one or more processors and the one or more memories (Liao- Fig. 1 and ¶0023, at least disclose Components of the computing device 100 may include, but are not limited to, one or more processors or processing units 110, a memory 120, a storage device 130; ¶0025, at least discloses The processing unit 110 may be a physical or virtual processor and can implement various processes based on programs stored in the memory 120) are configured to perform the method of claim 1.
7. Claims 12-16 are rejected under 35 U.S.C. 103 as being unpatentable over Liao et al. (“Liao”) [US-2022/0044352-A1] in view of Urushiya (“Urushiya”) [US-2007/0195091-A1], and further in view of Hiasa (“Hiasa”) [US-2020/0285883-A1].
Regarding claim 12, Liao discloses an image processing system (Liao- Fig. 1 and ¶0023-0024, at least disclose the computing device 100 includes a general-purpose computing device 100 […] the computing device 100 may be implemented as any user terminal or server terminal having the computing capability), comprising:
an image processing apparatus (Liao- Fig. 1 and ¶0023-0024, at least disclose the computing device 100 includes a general-purpose computing device 100 […] the computing device 100 may be implemented as any user terminal or server terminal having the computing capability; ¶0031, at least discloses the computing device is also referred to as a “image processing device 100”); and
a control apparatus configured to communicate with the image processing apparatus (Liao- Fig. 1 and ¶0028, at least disclose The communication unit 140 communicates with a further computing device via the communication medium. In addition, the functions of the components in the computing device 100 can be implemented by a single computing cluster or multiple computing machines that can communicate via communication connections. Therefore, the computing device 100 can operate in a networked environment using a logical connection with one or more other servers, networked personal computers (PCs) or further general network nodes),
wherein the image processing apparatus (As discussed above) includes
one or more memories (Liao- Fig. 1 and ¶0023, at least disclose Components of the computing device 100 may include, but are not limited to, one or more processors or processing units 110, a memory 120, a storage device 130); and
one or more processors, wherein the one or more processors and the one or more memories (Liao- Fig. 1 and ¶0023, at least disclose Components of the computing device 100 may include, but are not limited to, one or more processors or processing units 110, a memory 120, a storage device 130; ¶0025, at least discloses The processing unit 110 may be a physical or virtual processor and can implement various processes based on programs stored in the memory 120) are configured to:
acquire a second image obtained by applying a geometric transformation to a first image (see Claim 1 rejection for detailed analysis), wherein in the geometric transformation, point in the second image is mapped from point in the first image (see Claim 1 rejection for detailed analysis);
acquire information about a deformation amount of the first image in the geometric transformation (see Claim 1 rejection for detailed analysis);
generate a third image based on the second image and the information about the deformation amount (see Claim 1 rejection for detailed analysis); and
perform processing on the first image (Liao- ¶0032, at least discloses When performing the image translation, the image processing device 100 can receive a source image 102 through an input device 150), and
wherein the control apparatus (Liao- Fig. 1 and ¶0028, at least disclose The communication unit 140 communicates with a further computing device via the communication medium. In addition, the functions of the components in the computing device 100 can be implemented by a single computing cluster or multiple computing machines that can communicate via communication connections) includes
one or more memories (Liao- Fig. 1 and ¶0023, at least disclose Components of the computing device 100 may include, but are not limited to, one or more processors or processing units 110, a memory 120, a storage device 130); and
one or more processors, wherein the one or more processors and the one or more memories (Liao- Fig. 1 and ¶0023, at least disclose Components of the computing device 100 may include, but are not limited to, one or more processors or processing units 110, a memory 120, a storage device 130; ¶0025, at least discloses The processing unit 110 may be a physical or virtual processor and can implement various processes based on programs stored in the memory 120) are configured to:
cause the image processing apparatus to perform processing on the first image obtained by imaging using an imaging device (Liao- ¶0032, at least discloses When performing the image translation, the image processing device 100 can receive a source image 102 through an input device 150; ¶0047, at least discloses The image translation module 122 further includes a geometry detector 230 for detecting the geometry of the object in the source image 102 (i.e., the first geometry) for processing by the first learning network 210).
Liao does not explicitly disclose wherein in the geometric transformation, each point in the second image is mapped from a respective point in the first image; perform processing on the first image in response to a request; and transmit the request for causing the image processing apparatus to perform processing on the first image obtained by imaging using an optical system and an imaging device.
However, Urushiya discloses
in the geometric transformation, each point in the second image is mapped from respective point in the first image (Urushiya- ¶0017, at least discloses a body movement correction unit adapted to execute a correction of a body movement by executing geometric transformation to the plural projected images of which the projected angles of the radiation are different, by using the respective changed geometric transformation parameters; Fig. 4 shows each point in the second image is mapped from respective point in the first image; ¶0046-0048, at least disclose the coordinates of the corresponding points (the small black square points shown in FIG. 4) between a projected image 401 (for example, an image at scan angle 0°) and a projected image 402 (for example, an image at scan angle 360°) of which the respective projected angles overlap each other are acquired […] the sets of the coordinates of the respective corresponding points of the projected images 401 and 402 are acquired as much as the number of corresponding points. To achieve this, first, plural fixed points are set on one (e.g., projected image 401) of the two projected images […] if the plural fixed points are set with respect to one of the two projected images, the coordinates, on the other (e.g., projected image 402) of the two projected images, of the points respectively corresponding to these fixed points are acquired; when the coordinates of the corresponding points between the projected images 401 and 402 of which the projected angles overlap each other are acquired in the step S301, the geometric transformation parameter is acquired from the set of the coordinates of the corresponding points (step S302). For example, affine transformation may be used in such geometric transformation; ¶0055, at least discloses when the coordinates of the corresponding points between the projected images 401 and 402 of which the projected angles overlap each other are acquired in the step S301, the geometric transformation parameter is acquired from the set of the coordinates of the corresponding points (step S302). For example, affine transformation may be used in such geometric transformation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liao to incorporate the teachings of Urushiya, and apply the corresponding points between the projected images into Liao’s teachings in order to acquire a second image obtained by applying a geometric transformation to a first image, wherein in the geometric transformation, each point in the second image is mapped from a respective point in the first image.
Doing so would correct body movement as accurately as possible, thereby reducing artifacts that appear in the tomographic image.
The prior art discussed above does not clearly disclose these limitations; however, Hiasa discloses
perform processing on the first image in response to a request (Hiasa- Fig. 12 and ¶0078, at least disclose The communicator 604 a has a function of transmitting a request to the image estimating apparatus 603 for making the image estimating apparatus 603 process a captured image, and a function of receiving an output image processed by the image estimating apparatus 603; ¶0082, at least discloses the image estimating apparatus 603 receives the request to process the captured image from the computer 604);
transmit the request for causing the image processing apparatus to perform processing on the first image obtained by imaging using an optical system and an imaging device (Hiasa- Fig. 12 and ¶0078, at least disclose The communicator 604 a has a function of transmitting a request to the image estimating apparatus 603 for making the image estimating apparatus 603 process a captured image, and a function of receiving an output image processed by the image estimating apparatus 603; Fig. 2 and ¶0032, at least disclose the obtainer 101 b obtains a ground truth patch (first around truth image) and a training patch (first training image) […] The training patch that has the same captured object as that of the ground truth patch, is a low-resolution (low-quality) image with a large amount of blur caused by the aberration and the diffraction of the optical system 102 a; ¶0035, at least discloses The low-resolution captured image is generated by reducing as the same manner as that of the high-resolution captured image, by applying a blur caused by the aberration and the diffraction of the optical system 102 a, and by clipping the image at the brightness saturation value; Fig. 6 and ¶0062, at least disclose The imaging apparatus 302 obtains a captured image by imaging an object space […] The imaging apparatus 302 includes an optical system 321 and an image sensor 322. The contrast of the object in the captured image obtained by the image sensor 322 has decreased caused by the haze existing in the object space).
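A minimal sketch, for illustration only, of the request/response interaction Hiasa describes at Fig. 12 and ¶0078; the message fields and handler below are hypothetical, not Hiasa’s actual interface:

    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class ProcessingRequest:
        # The control apparatus asks the image processing apparatus to
        # process a captured image (cf. Hiasa ¶0078, ¶0082).
        image: np.ndarray            # first image from optical system + sensor
        deformation_info: dict = field(default_factory=dict)

    def handle_request(request, process):
        # Image processing apparatus side: perform processing on the first
        # image in response to the request; `process` is a hypothetical
        # processing pipeline, and the result is returned to the requester.
        return process(request.image, request.deformation_info)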
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liao/Urushiya to incorporate the teachings of Hiasa, and apply the request-based image processing into Liao/Urushiya’s teachings in order to perform processing on the first image in response to a request, and to transmit the request for causing the image processing apparatus to perform processing on the first image obtained by imaging using an optical system and an imaging device.
Doing so would provide an image processing system which suppresses an image noise variation associated with image processing.
Regarding claim 13, Liao discloses a method of generating a machine learning model (Liao- ¶0149, at least discloses a computer-implemented method, comprising: determining a first learning network for geometric deformation from a first image domain to a second image domain based on a first image in the first image domain and a second image in the second image domain, images in the first and second image domains having different styles and objects in the images having geometric deformation with respect to each other), the method comprising:
acquiring a first training image obtained by imaging using an imaging device (Liao- ¶0032, at least discloses When performing the image translation, the image processing device 100 can receive a source image 102 through an input device 150; ¶0047, at least discloses The image translation module 122 further includes a geometry detector 230 for detecting the geometry of the object in the source image 102 (i.e., the first geometry) for processing by the first learning network 210);
generating a second training image by applying a geometric transformation to the first training image (Liao- Fig. 2B and ¶0062, at least disclose after the first learning network 210 performs the geometric deformation on the source image 102 to deform the first geometry 202 of the object in the source image 102 [applying a geometric transformation to a first image] to the second geometry 204, the second geometry 204 and the source image 102 are input together to the merging module 240. The merging module 240 transforms the source image 102 based on the second geometry 204 to generate an intermediate image 242 [acquiring a second image obtained]. The merging module 240 performs image warping on the source image 102 under the guidance of the deformed second geometry 204, such that the object in the generated intermediate image 104 has a same or similar geometry as the second geometry 202. Since the warping is performed directly on the source image 102 [first image], the intermediate image 242 [second image] maintains the same first style of the source image 102 (e.g., the real photo style)), wherein in the geometric transformation, point in the second training image is mapped from point in the first training image (Liao- ¶0065, at least discloses using the landmark points to represent the geometry, it is assumed that LX and LY are the domains of landmark points in the photo domain (X) and the caricature domain (Y), respectively. The first learning network 210 is to be trained to learn the mapping Φgeo: LX→LY for geometric deformation, such that deformed landmark points ly∈LY in the domain Y are generated for the landmark point lx of the photo x in the domain X; ¶0068, at least discloses In the landmark point-based geometry representation, the landmark points are marked on the first image and the second image both for training. Therefore, the landmark points may be extracted from these images for training. In order to collect the geometry of all possible objects, a similar translation may be utilized to align the first image and the second image for training to an average shape of the objects through several landmark points (e.g., three landmark points on the human face, including centers of both eyes and a center of the mouth); ¶0120, at least discloses extracting first landmark points of a geometry of an object in the first image and second landmark points of a geometry of an object in the second image; determining a first principal component analysis (PCA) representation of the first landmark points and a second PCA representation of the second landmark points; ¶0136, at least discloses performing the geometric deformation comprises: determining landmark points in the source image that represent the first geometry; generating a principal component analysis (PCA) representation of the landmark points […] and determining deformed landmark points representing the second geometry based on the deformed PCA representation);
acquiring information about a deformation amount of the first training image in the geometric transformation (Liao- ¶0052, at least discloses the first learning network 210 may also perform the geometric deformation based on a degree of deformation indicated by the user to deform the first geometry of the object in the source image 102 to the second geometry. The degree of deformation may be indicated by the user […] through a user adjustable parameter may be set to indicate the degree of deformation [information about the deformation amount]. The second learning network 210 may determine a deformation of the second geometry relative to the first geometry based on the degree of deformation. For example, if the first learning network 210 is to magnify or diminish a part of the first geometry, the degree of deformation may control the extent to which the part is magnified or diminished; ¶0137, at least discloses obtaining an indication of a deformation degree of the object; and transferring the first geometry to the second geometry based on the deformation degree);
generating an estimated image by inputting the second training image and the information about the deformation amount to a machine learning model (Liao- ¶0052, at least discloses the first learning network 210 may also perform the geometric deformation based on a degree of deformation [information about the deformation amount] indicated by the user to deform the first geometry of the object in the source image 102 to the second geometry. The degree of deformation may be indicated by the user […] through a user adjustable parameter may be set to indicate the degree of deformation. The second learning network 210 may determine a deformation of the second geometry relative to the first geometry based on the degree of deformation [information about the deformation amount]; Fig. 2B and ¶0062-0063, at least disclose after the first learning network 210 performs the geometric deformation on the source image 102 to deform the first geometry 202 of the object in the source image 102 to the second geometry 204, the second geometry 204 and the source image 102 are input together to the merging module 240. The merging module 240 transforms the source image 102 based on the second geometry 204 to generate an intermediate image 242 […] The intermediate image 242 [the second image] is input to the second learning network 220 to perform the style transfer to generate the target image 104 [generating an estimated image]).
Liao does not explicitly disclose acquiring a first training image obtained by imaging using an optical system and an imaging device, information about the optical system, and a ground truth image; generating a second training image by applying a geometric transformation to the first training image based on the information about the optical system; wherein in the geometric transformation, each point in the second training image is mapped from a respective point in the first training image; and updating a weight of the machine learning model based on the ground truth image and the estimated image.
However, Urushiya discloses
in the geometric transformation, each point in the second training image is mapped from respective point in the first training image (Urushiya- ¶0017, at least discloses a body movement correction unit adapted to execute a correction of a body movement by executing geometric transformation to the plural projected images of which the projected angles of the radiation are different, by using the respective changed geometric transformation parameters; Fig. 4 shows each point in the second image is mapped from respective point in the first image; ¶0046-0048, at least disclose the coordinates of the corresponding points (the small black square points shown in FIG. 4) between a projected image 401 (for example, an image at scan angle 0°) and a projected image 402 (for example, an image at scan angle 360°) of which the respective projected angles overlap each other are acquired […] the sets of the coordinates of the respective corresponding points of the projected images 401 and 402 are acquired as much as the number of corresponding points. To achieve this, first, plural fixed points are set on one (e.g., projected image 401) of the two projected images […] if the plural fixed points are set with respect to one of the two projected images, the coordinates, on the other (e.g., projected image 402) of the two projected images, of the points respectively corresponding to these fixed points are acquired; when the coordinates of the corresponding points between the projected images 401 and 402 of which the projected angles overlap each other are acquired in the step S301, the geometric transformation parameter is acquired from the set of the coordinates of the corresponding points (step S302). For example, affine transformation may be used in such geometric transformation; ¶0055, at least discloses when the coordinates of the corresponding points between the projected images 401 and 402 of which the projected angles overlap each other are acquired in the step S301, the geometric transformation parameter is acquired from the set of the coordinates of the corresponding points (step S302). For example, affine transformation may be used in such geometric transformation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liao to incorporate the teachings of Urushiya, and apply the corresponding points between the projected images into Liao’s teachings for acquiring a second image obtained by applying a geometric transformation to a first image, wherein in the geometric transformation, each point in the second image is mapped from a respective point in the first image.
Doing so would correct body movement as accurately as possible, thereby reducing artifacts that appear in the tomographic image.
The prior art discussed above does not clearly disclose these limitations; however, Hiasa discloses
acquiring a first training image obtained by imaging using an optical system and an imaging device, information about the optical system, and a ground truth image (Hiasa- ¶0005, at least discloses An image processing method […] includes a first step configured to obtain a first ground truth image and a first training image [acquiring a first training image] […] by applying mutually correlated noises to the first ground truth image and the first training image; Fig. 4 and ¶0032, at least disclose in the step S101 in FIG. 4, the obtainer 101 b obtains a ground truth patch (first ground truth image) [a ground truth image] and a training patch (first training image) [first training image] […] the ground truth patch is a high-resolution (high-quality) image with a small amount of blur caused by the aberration and the diffraction of the optical system 102 b. The training patch that has the same captured object as that of the ground truth patch, is a low-resolution (low-quality) image with a large amount of blur caused by the aberration and the diffraction of the optical system 102 b. That is, the ground truth patch is an image having a relatively small amount of blur, and the training patch is an image having a relatively large amount of blur; Fig. 2 and ¶0028, at least disclose The imaging apparatus 102 includes an optical system 102 a and an image sensor 102 b. The optical system 102 a condenses light entering the imaging apparatus 102 from an object space. The image sensor 102 b receives (photoelectrically converts) an optical image (object image) formed via the optical system 102 a, and obtains a captured image. The image sensor 102 b is, for example, a CCD (Charge Coupled Device) sensor or a CMOS (Complementary Metal-Oxide Semiconductor) sensor. The captured image obtained by the imaging apparatus 102 includes a blur caused by an aberration and a diffraction of the optical system 102 a and a noise caused by the image sensor 102 b [information about the optical system]);
generating a second training image by applying a geometric transformation to the first training image based on the information about the optical system (Hiasa- ¶0024, at least discloses Weight learning where the weight (such as filters and biases) is to be used in the multilayer neural network, applies mutually correlated noises [applying a geometric transformation] to a first ground truth image and a first training image, and generates a second ground truth image and a second training image [generating a second training image] […] when image processing to be executed is resolution enhancing, the first training image is a low-resolution image, and the first ground truth image is a high-resolution image; ¶0028, at least discloses The optical system 102 a condenses light entering the imaging apparatus 102 from an object space. The image sensor 102 b receives (photoelectrically converts) an optical image (object image) formed via the optical system 102 a, and obtains a captured image […] The captured image obtained by the imaging apparatus 102 includes a blur caused by an aberration and a diffraction of the optical system 102 a and a noise caused by the image sensor 102 b [the information about the optical system]; Fig. 4 and ¶0032-0033, at least discloses in the step S101 in FIG. 4, the obtainer 101 b obtains a ground truth patch (first ground truth image) and a training patch (first training image) […] the ground truth patch is a high-resolution (high-quality) image with a small amount of blur caused by the aberration and the diffraction of the optical system 102 b. The training patch that has the same captured object as that of the ground truth patch, is a low-resolution (low-quality) image with a large amount of blur caused by the aberration and the diffraction of the optical system 102 b. That is, the ground truth patch is an image having a relatively small amount of blur, and the training patch is an image having a relatively large amount of blur […] The step S101 obtains a plurality of sets of the ground truth patch and the training patch; ¶0039, at least discloses In the step S103, the generator 101 c generates a noise ground truth patch (second ground truth image) and a noise training patch (second training image). FIG. 1 illustrates the flow from the steps S103 to S105. The generator 101 c applies noises based on a random number sequence 203 to a ground truth patch 201 and a training patch 202, and generates a noise ground truth patch 211 and a noise training patch 212; ¶0086, at least discloses The obtainer obtains a first ground truth image and a first training image in a first step. The generator generates a second ground truth image and a second training image by applying mutually correlated noises to the first ground truth image and the first training image, in a second step); and
updating a weight of the machine learning model based on the ground truth image and the estimated image (Hiasa- Figs. 4, 8 show a flowchart relating to weight learning; Figs. 1, 4 and ¶0031, at least disclose a description will be given of a weight (weight information) learning method (a learnt model manufacturing method) executed by the learning apparatus 101 in this embodiment. FIG. 1 illustrates a flow of the weight learning of a neural network. FIG. 4 is a flowchart relating to the weight learning. Mainly, the obtainer 101 b, the generator 101 c, or the updater 101 d of the learning apparatus 101 executes each step in FIG. 4; Fig. 4 and ¶0054, at least disclose in the step S105, the updater 101 d updates the weight (weight information) [ updating a weight] for the neural network [machine learning model] based on an error between the estimated patch 213 and the noise ground truth patch (second ground truth image) 211. The weight includes a filter component and a bias of each layer).
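For illustration of the training flow quoted above (Hiasa Fig. 4, steps S103-S105), as applied to the claimed combination, a hedged sketch with hypothetical callables (not Hiasa’s actual code):

    def training_step(model, second_training_image, deformation_info,
                      ground_truth, loss_fn, update_weights):
        # model(img, info)     -> estimated image
        # loss_fn(est, gt)     -> scalar error vs. the ground truth patch
        # update_weights(err)  -> applies the update to the weights
        #                         (filter components and biases, cf. ¶0054)
        estimated = model(second_training_image, deformation_info)
        error = loss_fn(estimated, ground_truth)
        update_weights(error)
        return error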
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liao/Urushiya to incorporate the teachings of Hiasa, and apply the optical system information and the ground-truth-based weight update into Liao/Urushiya’s teachings, for acquiring a first training image obtained by imaging using an optical system and an imaging device, information about the optical system, and a ground truth image; generating a second training image by applying a geometric transformation to the first training image based on the information about the optical system; and updating a weight of the machine learning model based on the ground truth image and the estimated image.
The same motivation that was utilized in the rejection of claim 12 applies equally to this claim.
Regarding claim 14, Liao in view of Urushiya and Hiasa discloses a learning apparatus (Liao- Fig. 1 and ¶0023, at least disclose the computing device 100; Figs. 3, 4A, 4B show training architecture for training a first learning network and a second learning network), comprising:
one or more memories (Liao- Fig. 1 and ¶0023, at least disclose Components of the computing device 100 may include, but are not limited to, one or more processors or processing units 110, a memory 120, a storage device 130); and
one or more processors, wherein the one or more processors and the one or more memories (Liao- Fig. 1 and ¶0023, at least disclose Components of the computing device 100 may include, but are not limited to, one or more processors or processing units 110, a memory 120, a storage device 130; ¶0025, at least discloses The processing unit 110 may be a physical or virtual processor and can implement various processes based on programs stored in the memory 120) are configured to perform the method of claim 13.
Regarding claim 15, Liao discloses an image processing system (Liao- Fig. 1 and ¶0023-0024, at least disclose the computing device 100 includes a general-purpose computing device 100 […] the computing device 100 may be implemented as any user terminal or server terminal having the computing capability), comprising:
a learning apparatus (Liao- Fig. 1 and ¶0023, at least disclose the computing device 100; Figs. 3, 4A, 4B show training architecture for training a first learning network and a second learning network); and
an imaging apparatus configured to communicate with the learning apparatus (Liao- Fig. 1 and ¶0028, at least disclose The communication unit 140 communicates with a further computing device via the communication medium. In addition, the functions of the components in the computing device 100 can be implemented by a single computing cluster or multiple computing machines that can communicate via communication connections. Therefore, the computing device 100 can operate in a networked environment using a logical connection with one or more other servers, networked personal computers (PCs) or further general network nodes), wherein the learning apparatus includes
one or more memories (Liao- Fig. 1 and ¶0023, at least disclose Components of the computing device 100 may include, but are not limited to, one or more processors or processing units 110, a memory 120, a storage device 130); and
one or more processors, wherein the one or more processors and the one or more memories (Liao- Fig. 1 and ¶0023, at least disclose Components of the computing device 100 may include, but are not limited to, one or more processors or processing units 110, a memory 120, a storage device 130; ¶0025, at least discloses The processing unit 110 may be a physical or virtual processor and can implement various processes based on programs stored in the memory 120) are configured to:
acquire a first training image obtained by imaging using an optical system and an imaging device, information about the optical system, and a ground truth image (see Claim 13 rejection for detailed analysis);
generate a second training image by applying a first geometric transformation to the first training image based on the information about the optical system (see Claim 13 rejection for detailed analysis), wherein in the first geometric transformation, each point in the second training image is mapped from respective point in the first training image (see Claim 13 rejection for detailed analysis);
acquire information about a deformation amount of the first training image in the geometric transformation (see Claim 13 rejection for detailed analysis);
generate an estimated image by inputting the second training image and the information about the deformation amount to a machine learning model (see Claim 13 rejection for detailed analysis); and
update a weight of the machine learning model based on the ground truth image and the estimated image (see Claim 13 rejection for detailed analysis), and
wherein the imaging apparatus (As discussed above) includes
one or more memories (Liao- Fig. 1 and ¶0023, at least disclose Components of the computing device 100 may include, but are not limited to, one or more processors or processing units 110, a memory 120, a storage device 130), and
one or more processors, wherein the one or more processors and the one or more memories (Liao- Fig. 1 and ¶0023, at least disclose Components of the computing device 100 may include, but are not limited to, one or more processors or processing units 110, a memory 120, a storage device 130; ¶0025, at least discloses The processing unit 110 may be a physical or virtual processor and can implement various processes based on programs stored in the memory 120) are configured to
acquire a first image (Liao- ¶0032, at least discloses When performing the image translation, the image processing device 100 can receive a source image 102 through an input device 150),
generate a second image by applying a second geometric transformation to the first image (Liao- Fig. 2B and ¶0062, at least disclose after the first learning network 210 performs the geometric deformation on the source image 102 to deform the first geometry 202 of the object in the source image 102 [applying a geometric transformation to a first image] to the second geometry 204, the second geometry 204 and the source image 102 are input together to the merging module 240. The merging module 240 transforms the source image 102 based on the second geometry 204 to generate an intermediate image 242 [acquiring a second image obtained]. The merging module 240 performs image warping on the source image 102 under the guidance of the deformed second geometry 204, such that the object in the generated intermediate image 104 has a same or similar geometry as the second geometry 202. Since the warping is performed directly on the source image 102 [first image], the intermediate image 242 [second image] maintains the same first style of the source image 102 (e.g., the real photo style)), wherein in the second geometric transformation, point in the second image is mapped from point in the first image (Liao- ¶0065, at least discloses using the landmark points to represent the geometry, it is assumed that LX and LY are the domains of landmark points in the photo domain (X) and the caricature domain (Y), respectively. The first learning network 210 is to be trained to learn the mapping Φgeo: LX→LY for geometric deformation, such that deformed landmark points ly∈LY in the domain Y are generated for the landmark point lx of the photo x in the domain X; ¶0068, at least discloses In the landmark point-based geometry representation, the landmark points are marked on the first image and the second image both for training. Therefore, the landmark points may be extracted from these images for training. In order to collect the geometry of all possible objects, a similar translation may be utilized to align the first image and the second image for training to an average shape of the objects through several landmark points (e.g., three landmark points on the human face, including centers of both eyes and a center of the mouth); ¶0120, at least discloses extracting first landmark points of a geometry of an object in the first image and second landmark points of a geometry of an object in the second image; determining a first principal component analysis (PCA) representation of the first landmark points and a second PCA representation of the second landmark points; ¶0136, at least discloses performing the geometric deformation comprises: determining landmark points in the source image that represent the first geometry; generating a principal component analysis (PCA) representation of the landmark points […] and determining deformed landmark points representing the second geometry based on the deformed PCA representation),
acquire information about a second deformation amount of the first image in the geometric transformation of the first image (Liao- ¶0052, at least discloses the first learning network 210 may also perform the geometric deformation based on a degree of deformation indicated by the user to deform the first geometry of the object in the source image 102 to the second geometry. The degree of deformation may be indicated by the user […] through a user adjustable parameter may be set to indicate the degree of deformation [information about a second deformation amount]. The second learning network 210 may determine a deformation of the second geometry relative to the first geometry based on the degree of deformation. For example, if the first learning network 210 is to magnify or diminish a part of the first geometry, the degree of deformation may control the extent to which the part is magnified or diminished; ¶0137, at least discloses obtaining an indication of a deformation degree of the object; and transferring the first geometry to the second geometry based on the deformation degree), and
generate a third image by inputting the second image and the information about the second deformation amount to the machine learning model (Liao- ¶0052, at least discloses the first learning network 210 may also perform the geometric deformation based on a degree of deformation [information about the deformation amount] indicated by the user to deform the first geometry of the object in the source image 102 to the second geometry. The degree of deformation may be indicated by the user […] through a user adjustable parameter may be set to indicate the degree of deformation. The second learning network 210 may determine a deformation of the second geometry relative to the first geometry based on the degree of deformation [information about the deformation amount]; Fig. 2B and ¶0062-0063, at least disclose after the first learning network 210 performs the geometric deformation on the source image 102 to deform the first geometry 202 of the object in the source image 102 to the second geometry 204, the second geometry 204 and the source image 102 are input together to the merging module 240. The merging module 240 transforms the source image 102 based on the second geometry 204 to generate an intermediate image 242 […] The intermediate image 242 [the second image] is input to the second learning network 220 to perform the style transfer to generate the target image 104 [generating a third image]).
Liao does not explicitly disclose: the optical system; the imaging device; acquire a first image acquired using the optical system and the imaging device, and information about the optical system; generate a second image by applying a second geometric transformation to the first image based on the information about the optical system; and wherein in the second geometric transformation, each point in the second image is mapped from a respective point in the first image.
However, Urushiya discloses
in the second geometric transformation, each point in the second image is mapped from respective point in the first image (Urushiya- ¶0017, at least discloses a body movement correction unit adapted to execute a correction of a body movement by executing geometric transformation to the plural projected images of which the projected angles of the radiation are different, by using the respective changed geometric transformation parameters; Fig. 4 shows each point in the second image is mapped from respective point in the first image; ¶0046-0048, at least disclose the coordinates of the corresponding points (the small black square points shown in FIG. 4) between a projected image 401 (for example, an image at scan angle 0°) and a projected image 402 (for example, an image at scan angle 360°) of which the respective projected angles overlap each other are acquired […] the sets of the coordinates of the respective corresponding points of the projected images 401 and 402 are acquired as much as the number of corresponding points. To achieve this, first, plural fixed points are set on one (e.g., projected image 401) of the two projected images […] if the plural fixed points are set with respect to one of the two projected images, the coordinates, on the other (e.g., projected image 402) of the two projected images, of the points respectively corresponding to these fixed points are acquired; when the coordinates of the corresponding points between the projected images 401 and 402 of which the projected angles overlap each other are acquired in the step S301, the geometric transformation parameter is acquired from the set of the coordinates of the corresponding points (step S302). For example, affine transformation may be used in such geometric transformation; ¶0055, at least discloses when the coordinates of the corresponding points between the projected images 401 and 402 of which the projected angles overlap each other are acquired in the step S301, the geometric transformation parameter is acquired from the set of the coordinates of the corresponding points (step S302). For example, affine transformation may be used in such geometric transformation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liao to incorporate the teachings of Urushiya, and apply the corresponding points between the projected images to Liao's teachings in order to perform a second geometric transformation in which each point in the second image is mapped from a respective point in the first image.
Doing so would correct body movement with high accuracy, reducing artifacts that appear in a tomographic image.
Liao in view of Urushiya does not clearly disclose, but Hiasa discloses
the optical system (Hiasa- Fig. 6 and ¶0062, at least disclose The imaging apparatus 302 includes an optical system 321 and an image sensor 322),
the imaging device (Hiasa- Fig. 6 and ¶0062, at least disclose The imaging apparatus 302 obtains a captured image by imaging an object space […] The imaging apparatus 302 includes an optical system 321 and an image sensor 322. The contrast of the object in the captured image obtained by the image sensor 322 has decreased caused by the haze existing in the object space.),
acquire a first image acquired using the optical system and the imaging device, and information about the optical system (Hiasa- ¶0005, at least discloses An image processing method […] includes a first step configured to obtain a first ground truth image and a first training image [acquire a first image] […] by applying mutually correlated noises to the first ground truth image and the first training image; Fig. 4 and ¶0032, at least disclose in the step S101 in FIG. 4, the obtainer 101 b obtains a ground truth patch (first ground truth image) [a ground truth image] and a training patch (first training image) [first image]; Fig. 2 and ¶0028, at least disclose The imaging apparatus 102 includes an optical system 102 a and an image sensor 102 b. The optical system 102 a condenses light entering the imaging apparatus 102 from an object space. The image sensor 102 b receives (photoelectrically converts) an optical image (object image) formed via the optical system 102 a, and obtains a captured image. The image sensor 102 b is, for example, a CCD (Charge Coupled Device) sensor or a CMOS (Complementary Metal-Oxide Semiconductor) sensor. The captured image obtained by the imaging apparatus 102 includes a blur caused by an aberration and a diffraction of the optical system 102 a and a noise caused by the image sensor 102 b [information about the optical system]),
generate a second image by applying a second geometric transformation to the first image based on the information about the optical system (Hiasa- ¶0024, at least discloses Weight learning where the weight (such as filters and biases) is to be used in the multilayer neural network, applies mutually correlated noises [applying a geometric transformation] to a first ground truth image and a first training image, and generates a second ground truth image and a second training image [generating a second training image] […] when image processing to be executed is resolution enhancing, the first training image is a low-resolution image, and the first ground truth image is a high-resolution image; ¶0028, at least discloses The optical system 102 a condenses light entering the imaging apparatus 102 from an object space. The image sensor 102 b receives (photoelectrically converts) an optical image (object image) formed via the optical system 102 a, and obtains a captured image […] The captured image obtained by the imaging apparatus 102 includes a blur caused by an aberration and a diffraction of the optical system 102 a and a noise caused by the image sensor 102 b [the information about the optical system]; Fig. 4 and ¶0032-0033, at least disclose in the step S101 in FIG. 4, the obtainer 101 b obtains a ground truth patch (first ground truth image) and a training patch (first training image) […] the ground truth patch is a high-resolution (high-quality) image with a small amount of blur caused by the aberration and the diffraction of the optical system 102 a. The training patch, which has the same captured object as that of the ground truth patch, is a low-resolution (low-quality) image with a large amount of blur caused by the aberration and the diffraction of the optical system 102 a. That is, the ground truth patch is an image having a relatively small amount of blur, and the training patch is an image having a relatively large amount of blur […] The step S101 obtains a plurality of sets of the ground truth patch and the training patch; ¶0039, at least discloses In the step S103, the generator 101 c generates a noise ground truth patch (second ground truth image) and a noise training patch (second training image). FIG. 1 illustrates the flow from the steps S103 to S105. The generator 101 c applies noises based on a random number sequence 203 to a ground truth patch 201 and a training patch 202, and generates a noise ground truth patch 211 and a noise training patch 212; ¶0086, at least discloses The obtainer obtains a first ground truth image and a first training image in a first step. The generator generates a second ground truth image and a second training image by applying mutually correlated noises to the first ground truth image and the first training image, in a second step).
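For illustration only (not Hiasa's code), the following minimal Python sketch shows mutually correlated noises applied to a ground-truth patch and a training patch by drawing both noise maps from one shared random number sequence; the patch contents, sigma, and seed are hypothetical.

    import numpy as np

    def add_correlated_noise(gt_patch, train_patch, sigma=0.01, seed=0):
        rng = np.random.default_rng(seed)
        noise = rng.normal(0.0, sigma, size=gt_patch.shape)  # one shared random sequence
        return gt_patch + noise, train_patch + noise         # identical noise on both patches

    gt, tr = np.zeros((8, 8)), np.zeros((8, 8))
    noisy_gt, noisy_tr = add_correlated_noise(gt, tr)
    assert np.allclose(noisy_gt, noisy_tr)  # fully correlated by construction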
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liao/Urushiya to incorporate the teachings of Hiasa, and apply the optical system and imaging device to Liao/Urushiya's teachings in order to acquire a first image acquired using the optical system and the imaging device, and information about the optical system, and to generate a second image by applying a second geometric transformation to the first image based on the information about the optical system.
The same motivation that was utilized in the rejection of claim 12 applies equally to this claim.
Regarding claim 16, Liao in view of Urushiya discloses the image processing method according to claim 2, and discloses the method further comprising the generating the third image (see the claim 1 rejection for detailed analysis).
Liao in view of Urushiya does not clearly disclose, but Hiasa discloses
acquiring information about a weight of the machine learning model before the generating the third image (Hiasa- ¶0024, at least discloses Weight learning where the weight (such as filters and biases) is to be used in the multilayer neural network, applies mutually correlated noises to a first ground truth image and a first training image, and generates a second ground truth image and a second training image […] when image processing to be executed is resolution enhancing, the first training image is a low-resolution image, and the first ground truth image is a high-resolution image; Figs. 1, 4 and ¶0042, at least disclose in the step S104, the generator 101 c inputs the noise training patch (second training image) 212 into the multilayer neural network [machine learning model], and generates an estimated patch (estimated image) 213 [generate a third image]).
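For illustration only (not Hiasa's code), the following minimal Python sketch separates acquiring the weight information from generating the third image with it; the single 3x3 filter standing in for a trained multilayer network, and all values, are hypothetical.

    import numpy as np

    # Step 1: acquire the weight information (in practice, loaded from the trained model's storage).
    weights = np.full((3, 3), 1.0 / 9.0)

    # Step 2: generate the third image by applying the weighted model to the second image.
    def generate_third_image(second_image: np.ndarray, weights: np.ndarray) -> np.ndarray:
        pad = np.zeros_like(second_image)
        pad[:weights.shape[0], :weights.shape[1]] = weights      # embed filter for FFT convolution
        return np.real(np.fft.ifft2(np.fft.fft2(second_image) * np.fft.fft2(pad)))

    print(generate_third_image(np.ones((16, 16)), weights).shape)  # (16, 16)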
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liao/Urushiya to incorporate the teachings of Hiasa, and apply the multilayer neural network to Liao/Urushiya's teachings in order to acquire information about a weight of the machine learning model before the generating the third image.
Doing so would provide an image processing system which suppresses an image noise variation associated with image processing.
8. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Liao in view of Urushiya, further in view of Lee et al. (“Lee”) [US-2014/0375845-A1]
Regarding claim 3, Liao in view of Urushiya discloses the image processing method according to claim 1, and does not clearly disclose, but Lee discloses wherein the information about the deformation amount includes a ratio of a distance between two points in the first image and a distance between two points in the second image corresponding to the two points in the first image (Lee- ¶0009, at least discloses the distorted image includes a plurality of distortion masks and the generating the lens calibration data includes calculating a coordinate of each of at least two distortion points corresponding to the at least two reference points, respectively, using the distortion masks. The generating the lens calibration data further includes calculating a ratio between a distance between the center of the pattern image and the coordinate of each of the at least two reference points and a distance between a center of the distorted image and a corresponding coordinate of one of the at least two distortion points.).
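For illustration only (not Lee's code), the following minimal Python sketch computes a deformation amount as the ratio of the distance between two points in one image to the distance between the corresponding two points in another; all coordinates are hypothetical.

    import numpy as np

    def distance_ratio(p1, p2, q1, q2) -> float:
        d_first = np.linalg.norm(np.subtract(p2, p1))    # distance in the first image
        d_second = np.linalg.norm(np.subtract(q2, q1))   # distance between corresponding points
        return d_first / d_second

    print(distance_ratio((10, 10), (50, 10), (12, 11), (44, 11)))  # 40 / 32 = 1.25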
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liao/Urushiya to incorporate the teachings of Lee, and apply the ratio between the distances to Liao/Urushiya's teachings so that the information about the deformation amount includes a ratio of a distance between two points in the first image and a distance between two points in the second image corresponding to the two points in the first image.
Doing so would provide an image restoration method for increasing the picture quality of images.
9. Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Liao in view of Urushiya, further in view of Mori et al. (“Mori”) [US-2017/0061575-A1]
Regarding claim 4, Liao in view of Urushiya discloses the image processing method according to claim 1, and does not clearly disclose, but Mori discloses wherein the information about the deformation amount includes a ratio of an area of a region in the first image and an area of a region in the second image corresponding to the region in the first image (Mori- Fig. 7 and ¶0079, at least disclose the projection region after deformation 720 is a rectangular region disposed in the projection region before deformation 540 and having a desired aspect ratio on the screen 520. The projection region after deformation 720 is disposed in the projection region before deformation 540 in order to improve convenience in installation of the display apparatus 100; ¶0083, at least discloses the CPU 110 calculates coordinates of intersections of the diagonal lines of the projection region after deformation 720 and the outer periphery (the outline) of the projection region before deformation 540 (S602). As shown in FIG. 7B, the CPU 110 calculates expressions of two straight lines passing through the reference point 710 and having slopes that are identical to the slopes of the diagonal lines of a rectangle having a desired aspect ratio, and calculates coordinates of points Q1 to Q4 where the straight lines cross the outer periphery of the projection region before deformation 540. In FIG. 7B, a setting value of the desired aspect ratio is represented as a:b. Two straight lines respectively having slopes (b/a) and (−b/a) are drawn).
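For illustration only (not Mori's code), the following minimal Python sketch computes a deformation amount as the ratio of the area of a region in the first image to the area of the corresponding region in the second image, using the shoelace formula for polygon area; the vertices are hypothetical.

    import numpy as np

    def polygon_area(pts: np.ndarray) -> float:
        x, y = pts[:, 0], pts[:, 1]   # shoelace formula for a simple polygon
        return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

    before = np.array([[0, 0], [100, 0], [100, 60], [0, 60]], dtype=float)  # area 6000
    after = np.array([[10, 5], [90, 5], [90, 55], [10, 55]], dtype=float)   # area 4000
    print(polygon_area(before) / polygon_area(after))                       # 1.5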
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liao/Urushiya to incorporate the teachings of Mori, and apply the aspect ratio before/after deformation to Liao/Urushiya's teachings so that the information about the deformation amount includes a ratio of an area of a region in the first image and an area of a region in the second image corresponding to the region in the first image.
Doing so would allow an image after deformation to be displayed in a position expected by the user.
10. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Liao in view of Urushiya, further in view of Li (“Li”) [US-2016/0117797-A1]
Regarding claim 5, Liao in view of Urushiya discloses the image processing method according to claim 1, and does not clearly disclose, but Li discloses wherein the information about the deformation amount includes a moving amount from one point in the first image to one point in the second image corresponding to the one point in the first image (Li- Figs. 15A, 15B show a moving amount from one point in the first image to one point in the second image corresponding to the one point in the first image; ¶0099-0100, at least disclose The feature points P2 to P5 correspond to feature points P2′ to P5′, respectively. The corresponding-point position information 15 is obtained from the above-described corresponding-point pairs. The control grid deforming unit 16 deforms the control grid 1201 based on the above-described corresponding-point position information 15. The control grid 1201 in FIG. 15(B) is the control grid obtained after the deformation. The moving image in FIG. 15(B) is the image obtained before the deformation. The registration accuracy can be improved by deforming the moving image by using the deformed control grid 1201 in FIG. 15(B), i.e., the initial values of the control points which have been more appropriately set.).
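For illustration only (not Li's code), the following minimal Python sketch computes the moving amount (displacement vector) from each point in the first image to its corresponding point in the second image; the coordinates are hypothetical.

    import numpy as np

    pts_first = np.array([[20.0, 30.0], [60.0, 30.0], [40.0, 70.0]])   # points in the first image
    pts_second = np.array([[22.0, 31.0], [58.0, 29.0], [40.0, 74.0]])  # corresponding points
    moving_amounts = pts_second - pts_first                            # per-point displacements
    print(moving_amounts)  # e.g., [[2. 1.] [-2. -1.] [0. 4.]]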
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liao/Urushiya to incorporate the teachings of Li, and apply the control grid to Liao/Urushiya's teachings so that the information about the deformation amount includes a moving amount from one point in the first image to one point in the second image corresponding to the one point in the first image.
Doing so would allow a registration result to be corrected by deforming the control grid, facilitating the correction.
11. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Liao in view of Urushiya, further in view of Sasaki (“Sasaki”) [US-2013/0308018-A1]
Regarding claim 7, Liao in view of Urushiya discloses the image processing method according to claim 1, and does not clearly disclose, but Sasaki discloses wherein the information about the deformation amount is two or more types of two-dimensional maps indicating deformation amounts corresponding to directions different from each other in the geometric transformation (Sasaki- Fig. 5 and ¶0045-0046, at least disclose an image plane divided into a plurality of areas 2. The plurality of areas 2 is set so as to be coarser in the areas near to the center of the image, and finer in the areas away from the center of the image […] The image is divided into a plurality of areas 2 such that the areas are coarser (larger) in the image height including a small distortion amount (e.g., the central or on-axis region of the lens), whereas, the areas become finer (smaller) in the image height including a larger distortion amount (e.g., the peripheral or off-axis region of the lens). Therefore, the division number of the areas may be changed according to a change in the amount of distortion with respect to the image height […] the image is divided into areas mainly in vertical and horizontal directions in consideration of the speed of digital image processing. However, in a case where the characteristics of the imaging optical system are mainly considered, there are cases where a division in a concentric direction or in a radial direction is desirable).
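For illustration only (not Sasaki's code), the following minimal Python sketch builds two two-dimensional maps, one per direction, holding horizontal (dx) and vertical (dy) deformation amounts at every pixel under a simple radial distortion model; the image size and coefficient k1 are hypothetical.

    import numpy as np

    h, w, k1 = 6, 8, 1e-6                      # hypothetical size and distortion coefficient
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0      # image center
    r2 = (xs - cx) ** 2 + (ys - cy) ** 2       # squared radius from the center
    dx_map = k1 * r2 * (xs - cx)               # deformation amount, horizontal direction
    dy_map = k1 * r2 * (ys - cy)               # deformation amount, vertical direction
    print(dx_map.shape, dy_map.shape)          # two maps, one per direction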
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liao/Urushiya to incorporate the teachings of Sasaki, and apply the vertical and horizontal directions, or a division in a concentric or radial direction, to Liao/Urushiya's teachings so that the information about the deformation amount is two or more types of two-dimensional maps indicating deformation amounts corresponding to directions different from each other in the geometric transformation.
Doing so would provide an enhanced technique for processing an image captured via an imaging optical system.
12. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Liao in view of Urushiya, further in view of Kato (“Kato”) [US-2009/0066726-A1]
Regarding claim 8, Liao in view of Urushiya discloses the image processing method according to claim 1, and does not clearly disclose, but Kato discloses wherein the geometric transformation is transformation varied in the deformation amount depending on a position of a pixel in the first image (Kato- Fig. 4B and ¶0049, at least disclose FIG. 4B is a diagram showing an example of an output image before distortion correction. For example, when the degree of distortion increases with distance from the center of the image in the upper or lower direction, the positions of pixels of the input image and the positions of pixels of the output image that are associated with each other are shifted in the width direction in the distortion correction using geometric transformation, depending on the vertical position of the pixel. Specifically, the pixels on a line segment V1 and a line segment V2 are mapped to the pixels on a line segment W1 and a line segment W2, respectively, that have different lengths. When the amount of distortion per pixel (in the vertical direction in the input image) is α, and the number of pixels between the line segments V1 and V2 is L, the positions of the pixels on the line segment V2 are associated with the positions of the pixels on the line segment W2 so that the line segment W2 is longer than the line segment W1 by Lα. As a result, the correspondences between the coordinates (Xo, Yo) and the coordinates (Xi, Yi) are determined by a given coordinate transformation equation (high degree polynomial)).
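For illustration only, the arithmetic quoted from Kato reduces to W2 = W1 + L·α; the following minimal Python sketch works that relationship through with hypothetical values.

    alpha = 0.05          # hypothetical amount of distortion per pixel (vertical direction)
    W1, L = 200.0, 40     # upper segment length and pixel count between the segments
    W2 = W1 + L * alpha   # the lower segment maps to a segment longer by L * alpha
    print(W2)             # 202.0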
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liao/Urushiya to incorporate the teachings of Kato, and apply the amount of distortion per pixel in the vertical direction to Liao/Urushiya's teachings so that the geometric transformation is transformation varied in the deformation amount depending on a position of a pixel in the first image.
Doing so would reduce costs required to realize a process of superimposing an overlay image on an image that is obtained by transforming an input image received from a capturing section.
13. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Liao in view of Urushiya, further in view of Umesawa (“Umesawa”) [US-2019/0325552-A1]
Regarding claim 9, Liao in view of Urushiya discloses the image processing method according to claim 1, and does not clearly disclose, but Umesawa discloses wherein the geometric transformation is transformation from a first projection method of the first image to a second projection method of the second image (Umesawa- Fig. 8A and ¶0027, at least disclose As shown in FIG. 8A, when an object PQ is projected onto a projective plane L by stereographic projection, the object PQ becomes an image P′Q′″ in the stereographic projection image. However, in the example in FIG. 8A, due to distortion of the lens, the object PQ becomes an image P″Q″ in the actual input image that is obtained by the stereographic projection method; ¶0043-0045, at least disclose the processing performed here is not limited to lens distortion correction, and other types of correction processing may be performed. For example, an input image obtained through a predetermined projection method (e.g., equidistant projection) may be subjected to processing for conversion into a stereographic projection image here. In this case, instead of determining the accuracy of lens distortion correction, the accuracy detection unit 102 can determine the amount of error that remains after this correction processing, and the image conversion unit 103 can use the determined amount of error to determine a set distance r_th).
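For illustration only (not Umesawa's code), the following minimal Python sketch converts a radius under the equidistant projection model (r = f·θ) to the radius under stereographic projection (r = 2f·tan(θ/2)); the focal length and radius are hypothetical.

    import math

    def equidistant_to_stereographic(r_eq: float, f: float) -> float:
        theta = r_eq / f                        # incident angle recovered from r = f * theta
        return 2.0 * f * math.tan(theta / 2.0)  # radius under r = 2 * f * tan(theta / 2)

    print(equidistant_to_stereographic(r_eq=300.0, f=400.0))  # remapped radius in pixels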
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liao/Urushiya to incorporate the teachings of Umesawa, and apply the processing for conversion from an equidistant projection into a stereographic projection image to Liao/Urushiya's teachings so that the geometric transformation is transformation from a first projection method of the first image to a second projection method of the second image.
Doing so would allow the set distance to be determined based on the accuracy of distortion correction with respect to the fisheye lens.
14. Claims 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Liao in view of Urushiya, further in view of Hatakeyama (“Hatakeyama”) [US-2011/0193997-A1]
Regarding claim 17, Liao in view of Urushiya discloses the image processing method according to claim 1, and does not clearly disclose, but Hatakeyama discloses wherein image quality of the second image is deteriorated relative to image quality of the first image due to the geometric transformation (Hatakeyama- ¶0006, at least discloses When g(x, y) represents a deteriorated image (input image) containing an image blur component, f(x, y) represents an original non-deteriorated image, h(x, y) represents a point spread function (PSF) which is a Fourier pair of an optical transfer function, * represents convolution, and (x, y) represents coordinates on an image, the following expression is established: g(x,y)=h(x,y)*f(x,y) […] In order to acquire the original image from the deteriorated image, both sides of the expression only need to be divided by H as below: G(u,v)/H(u,v)=F(u,v); ¶0011, at least discloses Deterioration of the image due to the color blur component is substantially corrected by, for example, causing blur amounts of respective color components to be uniform by the image blur component correction; ¶0015, at least discloses In order to obtain a high-quality image by properly correcting an image deteriorated by various aberrations of the optical system, processing for reducing the image blur component and the distortion component needs to be performed).
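For illustration only (not Hatakeyama's code), the following minimal Python sketch implements the quoted deterioration model g = h * f and its frequency-domain inversion F = G/H, stabilized with a small epsilon so near-zero components of H do not blow up; the image, PSF, and epsilon are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    f = rng.random((32, 32))                        # original (non-deteriorated) image
    h = np.zeros((32, 32)); h[:3, :3] = 1.0 / 9.0   # toy 3x3 box point spread function
    H = np.fft.fft2(h)
    G = np.fft.fft2(f) * H                          # g = h * f, expressed in the Fourier domain
    eps = 1e-8                                      # regularizer against division by near-zero H
    f_rec = np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + eps)))
    print(np.max(np.abs(f_rec - f)))                # small reconstruction error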
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liao/Urushiya to incorporate the teachings of Hatakeyama, and apply the deteriorated image to Liao/Urushiya's teachings so that image quality of the second image is deteriorated relative to image quality of the first image due to the geometric transformation.
Doing so would obtain a high-quality image by properly correcting an image deteriorated by various aberrations of the optical system.
Regarding claim 18, Liao in view of Urushiya discloses the image processing method according to claim 1, and does not clearly disclose, but Hatakeyama discloses wherein the third image is an image in which deterioration in the image quality of the second image caused by the geometric transformation is corrected (Hatakeyama- ¶0011, at least discloses Deterioration of the image due to the color blur component is substantially corrected by, for example, causing blur amounts of respective color components to be uniform by the image blur component correction; ¶0015, at least discloses In order to obtain a high-quality image by properly correcting an image deteriorated by various aberrations of the optical system, processing for reducing the image blur component and the distortion component needs to be performed).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Liao/Urushiya to incorporate the teachings of Hatakeyama, and apply obtaining a high-quality image by properly correcting a deteriorated image to Liao/Urushiya's teachings so that the third image is an image in which deterioration in the image quality of the second image caused by the geometric transformation is corrected.
The same motivation that was utilized in the rejection of claim 17 applies equally to this claim.
Conclusion
15. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is recited in the attached PTO-892 form.
16. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL LE whose telephone number is (571)272-5330. The examiner can normally be reached 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL LE/Primary Examiner, Art Unit 2614