DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 15, 2025 has been entered.
Applicant(s) Response to Official Action
The response filed on November 20, 2025 has been entered and made of record. Claims 1 and 7-9 have been amended. Accordingly, claims 1-9 are currently pending in the application.
Response to Arguments
Applicant's arguments, see pages 6-8, with respect to the rejection of Claims 1 and 7 under 35 U.S.C. 103 as being unpatentable over KOBAYASHI TOSHIHIRO, Canon KK (JP 2019-125056 A) in view of Lindner et al. (US 2017/0148168 A1) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of the newly applied reference MYOKAN et al. (US 2018/0350107 A1) with respect to the limitation wherein the control unit estimates errors, caused by distortion based on the distortion information, in positions of a measurement point in left and right images, as recited in amended Claims 1 and 7.
Applicant's arguments, see pages 8-11, with respect to the rejection of Claims 8 and 9 under 35 U.S.C. 102(a)(1) as being anticipated by KOBAYASHI TOSHIHIRO, Canon KK (JP 2019-125056 A) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection under 35 U.S.C. 103 is made in view of the newly applied reference MYOKAN et al. (US 2018/0350107 A1) with respect to the limitations of a first optical system comprising a first lens, the first optical system defining a first optical path through the first lens; a second optical system comprising a second lens, the second optical system defining a second optical path through the second lens; a first imaging element positioned along the first optical path, the first imaging element configured to photograph an image of a predetermined space of the first optical path; and a second imaging element positioned along the second optical path, the second imaging element configured to photograph an image of a predetermined space of the second optical path, as recited in amended Claims 8 and 9.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 and 6-9 are rejected under 35 U.S.C. 103 as being unpatentable over KOBAYASHI TOSHIHIRO, Canon KK (JP 2019-125056 A) (see IDS filed on January 25, 2024), referred to as TOSHIHIRO hereinafter, in view of MYOKAN et al. (US 2018/0350107 A1), referred to as MYOKAN hereinafter.
Regarding Claim 1, TOSHIHIRO teaches an information processing device (Fig. 1, Par. [0009] the information processing apparatus 200) comprising:
a control unit (Fig 2, Par. [0009] the control unit 204),
the control unit acquiring depth information (As illustrated in the flow chart of Fig. 2, control unit 204 acquires information from detection unit 202, Par. [0008] The detection unit detects the position of the object to be held based on the captured image and the three-dimensional coordinates (i.e. depth information). Par. [0017] the imaging unit 101 has a function of outputting an image 154a, an image 154b, a parallax image, a depth map (i.e. depth information) obtained by a stereo method, and three-dimensional coordinates, where the depth map refers to an image that holds a value that is correlated with the distance (depth) to the measurement target for each pixel that constitutes the image 154c, see Par. [0018]) in a predetermined space (Par. [0009] the imaging unit 101 captures an environment (i.e. predetermined space) in which the plurality of gripping objects 301 in the supply tray 304 are loaded) and distortion information of an imaging device (Par. [0021] the imaging unit 101 can read, from the mounted lens, specific information of the lens such as a focal length range, an aperture, a distortion coefficient (i.e. distortion information), and an optical center) that generates the depth information (Par.[0021] The read unique information is used to calculate three-dimensional coordinates (i.e. generate depth information)), and
correcting the depth information based on the distortion information to generate corrected depth information (Par. [0021] The read unique information is used for correction of lens distortion of parallax images and depth maps (i.e. correcting of depth information)), wherein
the control unit estimates errors and the control unit calculates a value based on the estimated errors (Par. [0056]-[0060], three-dimensional information calculation unit 205 calculates three-dimensional information using the parallax image, focal length, and baseline length of the visual information input by the input unit 201. The three-dimensional information includes three dimensional coordinates at each pixel of the parallax image. The three-dimensional information once calculated is corrected (i.e. estimates errors) by referring to the input image).
While TOSHIHIRO teaches the read unique information is used for correction of lens distortion of parallax images and depth maps, TOSHIHIRO does not specifically teach calculating a correction value for depth. Therefore, TOSHIHIRO fails to explicitly teach the control unit estimates errors, caused by distortion based on the distortion information, in positions of a measurement point in left and right images, and the control unit calculates a correction value for a depth value based on the estimated errors.
However, MYOKAN teaches the control unit estimates errors (Fig. 18, control system 2000, Fig. 1 Par. [0066] The estimation section 45 estimates the estimation parameter for use in the estimation formula on the basis of the left and right pair coordinates of the feature points and the estimation formula in such a manner as to make smallest a difference between a measured value of the perpendicular misalignment (i.e. error) that is a difference between the left and right pair coordinates of the feature points in the perpendicular direction and an estimated value (i.e. error) of the perpendicular misalignment calculated by the estimation formula), caused by distortion based on the distortion information in positions of a measurement point in left and right images (Par. [0057], the misalignment between the left image and the right image in a horizontal direction (i.e. position of measurement) and a perpendicular direction due to lens distortions of the left camera 21A and the right camera 21B), and
the control unit calculates a correction value for a depth value based on the estimated errors (Par. [0055], the generation section 33 reads out over-time misalignment parameters (i.e. correction value) for use in a model formula that represents a misalignment (i.e. distortion) between the left image and the right image due to an over-time misalignment of the stereo camera 21 from the over-time misalignment estimation section 13, which includes a pitch angle difference, a yaw angle difference, and a roll angle difference between the left camera 21A (first imaging section) and the right camera 21B (second imaging section), and a scale ratio of the left image to the right image. Par. [0188] When the difference ϕ is generated, the large x misalignment occurs in the entire picture plane and an error (misalignment) occurs in the depth (parallax) (i.e. depth value) generated by the stereo matching).
References TOSHIHIRO and MYOKAN are considered to be analogous art because they relate to imaging systems. Therefore, it would have been obvious to one possessing ordinary skill in the art before the effective filing date of the claimed invention to specify correcting depth values, as suggested by MYOKAN, in the invention of TOSHIHIRO. This modification would allow a correction (rectification) of the misalignment between the left image and the right image in a horizontal direction and a perpendicular direction due to lens distortions of the left camera 21A and the right camera 21B, a geometrical position misalignment between the left camera 21A and the right camera 21B, or the like (See MYOKAN, Par. [0057]).
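As general context only (well-known stereo geometry, not attributed to either reference; the symbols f, B, d, and Z below are illustrative and do not appear in the cited paragraphs), the standard triangulation relationship shows why a distortion-induced error in the positions of a measurement point in the left and right images propagates into the computed depth, consistent with MYOKAN's statement at Par. [0188] that a misalignment produces an error in the depth (parallax): with focal length f, baseline B, and disparity d (the difference between the left-image and right-image positions of the measurement point), the depth is Z = f*B/d, so a disparity error Δd produces a depth error of approximately ΔZ ≈ (f*B/d^2)*Δd. A correction value for the estimated position error in the left and right images therefore corresponds directly to a correction value for the depth value.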
Regarding Claim 2, TOSHIHIRO in view of MYOKAN teaches claim 1. TOSHIHIRO further teaches wherein the distortion information is information based on at least one of characteristics of an optical system of the imaging device and errors in the placement of the imaging device (Par. [0021] the imaging unit 101 can read, from the mounted lens (i.e. placement of imaging device), specific information of the lens such as a focal length range (i.e. characteristics of optical system), an aperture (i.e. characteristics of optical system), a distortion coefficient (i.e. errors), and an optical center (i.e. characteristics of optical system)).
Regarding Claim 3, TOSHIHIRO in view of MYOKAN teaches claim 1. TOSHIHIRO further teaches wherein the control unit acquires the distortion information (As illustrated in the flow chart of Fig. 2, the control unit 204 acquires information from the imaging device 101, Par. [0021] the imaging unit 101 (i.e. imaging device) can read, from the mounted lens, specific information of the lens such as a focal length range, an aperture, a distortion coefficient (i.e. distortion information), and an optical center) from a storage unit possessed by the imaging device (Par. [0022], The imaging unit 101 selectively outputs all or part of visual information according to a parameter set in a storage area (not shown) provided inside the imaging unit 101 (i.e. possessed by the imaging device)).
Regarding Claim 6, TOSHIHIRO in view of MYOKAN teaches claim 1. TOSHIHIRO further teaches wherein the predetermined space is a work space of a robot (Par. [0008] a three-dimensional position is detected using an imaging unit for a plurality of objects arranged at random positions, and gripping is performed (i.e. work space of robot) using an end effector (gripping means) attached to a robot arm or the like), and the control unit controls the robot based on the corrected depth information (Par. [0009] The gripping unit 102 drives the robot arm 302 and the end effector 303 under the control of the control unit 204 provided inside the information processing apparatus 200).
Method claim 7 is drawn to the method of using the corresponding apparatus claimed in claim 1. Therefore, method claim 7 corresponds to apparatus claim 1 and is rejected for the same reasons of obviousness as set forth above.
Regarding Claim 8, TOSHIHIRO teaches an imaging device (Fig. 1 and Fig. 2, Par. [0009] an imaging unit 101) comprising:
a first optical system comprising a first lens (Fig. 3A, Par. [0012], the imaging unit 101 includes an imaging element 150 inside. A large number of light receiving sections 151 (i.e. first optical system) are provided inside the image sensor 150. Each light receiving unit 151 is provided with a microlens 153 (i.e. first lens) on its upper surface to collect light);
a second optical system comprising a second lens (Fig. 3A, Par. [0012], the imaging unit 101 includes an imaging element 150 inside. A large number of light receiving sections 151 (i.e. second optical system) are provided inside the image sensor 150. Each light receiving unit 151 is provided with a microlens 153 (i.e. second lens) on its upper surface to collect light);
a first imaging element (Par. [0012], each light receiving unit 151 includes a plurality of light receiving elements 152 (i.e. first imaging element)) positioned along the first optical path (Par. [0013], the light receiving element 152a receives the light beam incident (i.e. first optical path) from the right side of the micro lens 153), the first imaging element configured to photograph an image (Par. [0014], the light receiving element 152a to generate the image 154a) of a predetermined space (Par. [0009] the imaging unit 101 captures an environment (i.e. predetermined space) in which the plurality of gripping objects 301 in the supply tray 304 are loaded);
a second imaging element (Par. [0012], each light receiving unit 151 includes a plurality of light receiving elements 152 (i.e. second imaging element)) positioned along the second optical path (Par. [0013], the light receiving element 152b receives the light beam incident (i.e. second optical path) from the left side of the micro lens 153), the second imaging element configured to photograph an image (Par. [0014], the light receiving element 152b to generate the image 154b) of a predetermined space (Par. [0009] the imaging unit 101 captures an environment (i.e. predetermined space) in which the plurality of gripping objects 301 in the supply tray 304 are loaded);
a storage unit (Par. [0022], The imaging unit 101 selectively outputs all or part of visual information according to a parameter set in a storage area (i.e. storage unit) provided inside the imaging unit 101); and
a control unit (Par. [0017], The imaging unit 101 has a function of outputting (i.e. control unit)), wherein
the storage unit stores distortion information based on at least one of characteristics of the first optical system and the second optical system (Par. [0021] the imaging unit 101 can read, from the mounted lens (i.e. the first optical system 151 and the second optical system 151 each have a first lens 153 and a second lens 153), specific information of the lens such as a focal length range (i.e. characteristics of optical systems), an aperture (i.e. characteristics of optical systems), a distortion coefficient (i.e. errors), and an optical center (i.e. characteristics of optical systems)) and errors in the placement of the first imaging element and the second imaging element, and
the control unit generates depth information of the predetermined space based on images obtained by photographing a first measurement point in the first predetermined space with the first imaging element (Par. [0115], let t_a be the image coordinates of the gripping target object 301 (i.e. first measurement point) detected in the input image a) and photographing the first measurement point in the second predetermined space with the second imaging element (Par. [0115], let t_b be the image coordinates of the same gripped object 301 (i.e. first measurement point) detected in input image b), and outputs the depth information and the distortion information (Par. [0021] The imaging unit 101 outputs the images 154a to 154c and the parallax image, has the function of correcting lens distortion of the depth map (i.e. outputs distortion information), and outputs the image coordinates of the principal point position (hereinafter referred to as the image center) and the base lengths of the images 154a and 154b. It also has a function of outputting three-dimensional measurement data (i.e. outputs depth information) such as generated images 154a to 154c, optical system data such as focal length and image center, parallax image, baseline length, depth map, and three-dimensional coordinates).
TOSHIHIRO does not specifically teach optical paths of each optical system. Therefore, TOSHIHIRO fails to explicitly teach the first optical system defining a first optical path through the first lens; the second optical system defining a second optical path through the second lens; a first imaging element positioned along the first optical path; a second imaging element positioned along the second optical path.
However, MYOKAN teaches the first optical system defining a first optical path through the first lens; the second optical system defining a second optical path through the second lens (Par. [0076], the left camera 21A (i.e. first optical system) and the right camera 21B (i.e. second optical system) in the direction of rotation about a Z-axis which is an axis in an optical axis (i.e. optical path) direction, respectively, Par. [0050], lens of the left camera 21A and the right camera 21B (i.e. each with lens)).
References TOSHIHIRO and MYOKAN are considered to be analogous art because they relate to imaging systems. Therefore, it would have been obvious to one possessing ordinary skill in the art before the effective filing date of the claimed invention to specify optical paths of the optical systems, as suggested by MYOKAN, in the invention of TOSHIHIRO, in order that the imaging section may be provided either as sensors or apparatuses independent of each other or as an apparatus obtained by integrating a plurality of sensors or apparatuses (See MYOKAN, Par. [0215]).
Regarding Claim 9, TOSHIHIRO teaches an information processing system (Fig. 1 and Fig. 2) comprising:
an information processing device (Fig. 1, Par. [0009] the information processing apparatus 200); and
an imaging device (Fig. 1 and Fig. 2, Par. [0009] an imaging unit 101),
wherein the imaging device includes
a first optical system comprising a first lens (Fig. 3A, Par. [0012], the imaging unit 101 includes an imaging element 150 inside. A large number of light receiving sections 151 (i.e. first optical system) are provided inside the image sensor 150. Each light receiving unit 151 is provided with a microlens 153 (i.e. first lens) on its upper surface to collect light);
a second optical system comprising a second lens (Fig. 3A, Par. [0012], the imaging unit 101 includes an imaging element 150 inside. A large number of light receiving sections 151 (i.e. second optical system) are provided inside the image sensor 150. Each light receiving unit 151 is provided with a microlens 153 (i.e. second lens) on its upper surface to collect light);
a first imaging element (Par. [0012], each light receiving unit 151 includes a plurality of light receiving elements 152 (i.e. first imaging element)) positioned along the first optical path (Par. [0013], the light receiving element 152a receives the light beam incident (i.e. first optical path) from the right side of the micro lens 153), the first imaging element configured to photograph an image (Par. [0014], the light receiving element 152a to generate the image 154a) of a predetermined space (Par. [0009] the imaging unit 101 captures an environment (i.e. predetermined space) in which the plurality of gripping objects 301 in the supply tray 304 are loaded);
a second imaging element (Par. [0012], each light receiving unit 151 includes a plurality of light receiving elements 152 (i.e. second imaging element)) positioned along the second optical path (Par. [0013], the light receiving element 152b receives the light beam incident (i.e. second optical path) from the left side of the micro lens 153), the second imaging element configured to photograph an image (Par. [0014], the light receiving element 152b to generate the image 154b) of a predetermined space (Par. [0009] the imaging unit 101 captures an environment (i.e. predetermined space) in which the plurality of gripping objects 301 in the supply tray 304 are loaded);
a storage unit (Par. [0022], The imaging unit 101 selectively outputs all or part of visual information according to a parameter set in a storage area (i.e. storage unit) provided inside the imaging unit 101), and
a control unit (Par. [0017], The imaging unit 101 has a function of outputting (i.e. control unit)),
the storage unit stores distortion information based on at least one of characteristics of the first optical system and the second optical system (Par. [0021] the imaging unit 101 can read, from the mounted lens (i.e. the first optical system 151 and the second optical system 151 each have a first lens 153 and a second lens 153), specific information of the lens such as a focal length range (i.e. characteristics of optical systems), an aperture (i.e. characteristics of optical systems), a distortion coefficient (i.e. errors), and an optical center (i.e. characteristics of optical systems)) and errors in the placement of the first imaging element and the second imaging element, and
the control unit generates depth information of the predetermined space based on images obtained by photographing a first measurement point in the first predetermined space with the first imaging element (Par. [0115], let t_a be the image coordinates of the gripping target object 301 (i.e. first measurement point) detected in the input image a) and photographing the first measurement point in the second predetermined space with the second imaging element (Par. [0115], let t_b be the image coordinates of the same gripped object 301 (i.e. first measurement point) detected in input image b), and outputs the depth information and the distortion information (Par. [0021] The imaging unit 101 outputs the images 154a to 154c and the parallax image, has the function of correcting lens distortion of the depth map (i.e. outputs distortion information), and outputs the image coordinates of the principal point position (hereinafter referred to as the image center) and the base lengths of the images 154a and 154b. It also has a function of outputting three-dimensional measurement data (i.e. outputs depth information) such as generated images 154a to 154c, optical system data such as focal length and image center, parallax image, baseline length, depth map, and three-dimensional coordinates), and
the information processing device acquires the depth information (As illustrated in the flow chart of Fig. 2, control unit 204 acquires information from detection unit 202, Par. [0008] The detection unit detects the position of the object to be held based on the captured image and the three-dimensional coordinates (i.e. depth information)) and the distortion information from the imaging device (Par. [0021] the imaging unit 101 can read, from the mounted lens, specific information of the lens such as a focal length range, an aperture, a distortion coefficient (i.e. distortion information), and an optical center) and corrects the depth information based on the distortion information to generate corrected depth information (Par. [0021] The read unique information is used for correction of lens distortion of parallax images and depth maps (i.e. correcting of depth information)).
TOSHIHIRO does not specifically teach optical paths of each optical system. Therefore, TOSHIHIRO fails to explicitly teach the first optical system defining a first optical path through the first lens; the second optical system defining a second optical path through the second lens; a first imaging element positioned along the first optical path; a second imaging element positioned along the second optical path.
However, MYOKAN teaches the first optical system defining a first optical path through the first lens; the second optical system defining a second optical path through the second lens (Par. [0076], the left camera 21A (i.e. first optical system) and the right camera 21B (i.e. second optical system) in the direction of rotation about a Z-axis which is an axis in an optical axis (i.e. optical path) direction, respectively, Par. [0050], lens of the left camera 21A and the right camera 21B (i.e. each with lens)).
References TOSHIHIRO and MYOKAN are considered to be analogous art because they relate to imaging systems. Therefore, it would have been obvious to one possessing ordinary skill in the art before the effective filing date of the claimed invention to specify optical paths of the optical systems, as suggested by MYOKAN, in the invention of TOSHIHIRO, in order that the imaging section may be provided either as sensors or apparatuses independent of each other or as an apparatus obtained by integrating a plurality of sensors or apparatuses (See MYOKAN, Par. [0215]).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over TOSHIHIRO (JP 2019-125056 A), in view of MYOKAN (US 2018/0350107 A1), and further in view of YAMAZAKI et al. (US 2020/0410650 A1), referred to as YAMAZAKI hereinafter.
Regarding Claim 4, TOSHIHIRO in view of MYOKAN teaches claim 1. TOSHIHIRO further teaches wherein the control unit acquires, from the imaging device, the distortion information that is stored therein (Par. [0021] the imaging unit 101 (i.e. imaging device) can read, from the mounted lens, specific information of the lens such as a focal length range, an aperture, a distortion coefficient (i.e. distortion information), and an optical center).
TOSHIHIRO in view of MYOKAN does not specifically teach address information. Therefore, TOSHIHIRO in view of MYOKAN fails to explicitly teach that the control unit acquires, from the imaging device, address information that specifies the location where the distortion information is stored, and acquires the distortion information based on the address information.
However, YAMAZAKI teaches wherein the control unit acquires, from the imaging device (Fig. 1, Par. [0054] The control section 23 (i.e. control unit) performs various processes by using the captured image (i.e. acquires from the imaging device). For example, the control section 23 displays, processes, records, and transmits the captured image, and performs an objection recognition process and a distance measurement process by using the captured image), address information that specifies the location where the distortion information is stored (Fig. 2, Par. [0240], information indicative of the result of estimation by the distortion estimation section 51 is supplied to the distortion correction section 53, for example, through a network (i.e. address information) or a storage medium), and acquires the distortion information based on the address information (Par. [0240], the distortion estimation section 51 and the distortion correction section 53 may be separately disposed in different apparatuses. Moreover, for example, the distortion correction section 53 may be disposed in the imaging section 21 so as to let the imaging section 21 correct lens distortion (i.e. acquire information) and transmissive body distortion).
References TOSHIHIRO, MYOKAN and YAMAZAKI are considered to be analogous art because they relate to imaging systems. Therefore, it would have been obvious to one possessing ordinary skill in the art before the effective filing date of the claimed invention to specify the address location of the distortion information, as suggested by YAMAZAKI, in the inventions of TOSHIHIRO and MYOKAN. This modification would allow the signal processing section to be disposed in a single apparatus or separately disposed in a plurality of apparatuses through a network or storage medium (See YAMAZAKI, Par. [0240]).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over TOSHIHIRO (JP 2019-125056 A), in view of MYOKAN (US 2018/0350107 A1), and further in view of Kojima (US 2015/0022726 A1).
Regarding Claim 5, TOSHIHIRO in view of MYOKAN teaches claim 1. TOSHIHIRO further teaches wherein the control unit acquires an image of the predetermined space from the imaging device (Fig. 1, Fig. 2, Par. [0024], The input unit 201 captures visual information output from the imaging unit 101 into the information processing apparatus 200. In the present embodiment, the visual information includes the image 154c (i.e. an image) captured by the imaging unit 101) and corrects the image of the predetermined space based on the corrected depth information (Par. [0085] The three-dimensional coordinate calculation unit calculates three-dimensional coordinates based on the input image and the parallax image, and then corrects the three-dimensional coordinates using the distance between the markers).
TOSHIHIRO in view of MYOKAN does not specifically teach correcting peripheral illumination. Therefore, TOSHIHIRO in view of MYOKAN fails to explicitly teach that the control unit corrects peripheral illumination of the image of the predetermined space based on the corrected depth information.
However, Kojima teaches the control unit (Fig. 11, Par. [0074], CPU 1101 controlling the operation of the entire computer) corrects peripheral illumination of the image (Par. [0056], luminance values at pixel positions in the projection video corresponding to the respective positions arranged in the defined region at an equal interval are obtained from luminance values at peripheral pixel positions around the pixel positions in the input video corresponding to the positions) of the predetermined space based on the corrected depth information (Par. [0027], A luminance correction unit 105 corrects luminance components (i.e. corrects peripheral illumination) in the projection video generated by the distortion correction unit 103 using a reflection rate calculated by processing (to be described later) of a reflection rate calculation unit 104).
References TOSHIHIRO, MYOKAN and Kojima are considered to be analogous art because they relate to imaging systems. Therefore, it would have been obvious to one possessing ordinary skill in the art before the effective filing date of the claimed invention to specify correcting peripheral illumination of an image, as suggested by Kojima, in the inventions of TOSHIHIRO and MYOKAN. This modification would allow a brightness adjustment in addition to distortion correction, since brightness varies because of the difference in the incident angle of the video with respect to the projection plane (See Kojima, Par. [0005]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. TERANISHI (US 2022/0295037 A1) teaches correcting distortion remaining after camera assembly and misalignment between the camera modules, and deriving a depth between the stereo camera and an object using the disparity. Miyata (US 2021/0174531 A1) teaches a three-dimensional (3D) geometry measurement apparatus and method that calculates a correction value for correcting target measurement data in order to prevent inconsistency in measurement results when the geometry of an object is measured using different optical devices.
Any inquiry concerning this communication should be directed to SUSAN E HODGES whose telephone number is (571)270-0498. The Examiner can normally be reached on Monday - Friday from 8:00 am (EST) to 4:00 pm (EST).
If attempts to reach the Examiner by telephone are unsuccessful, the Examiner's supervisor, Brian T. Pendleton, can be reached on (571) . The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://portal.uspto.gov/external/portal. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/Susan E. Hodges/Primary Examiner, Art Unit 2425