Prosecution Insights
Last updated: April 19, 2026
Application No. 18/866,166

METHOD, COMPUTER PROGRAM, AND DEVICE FOR ALIGNING CAMERAS

Non-Final OA §103
Filed: Nov 15, 2024
Examiner: DAGNEW, MEKONNEN D
Art Unit: 2638
Tech Center: 2600 — Communications
Assignee: Isra Vision GmbH
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
To Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (604 granted / 728 resolved; +21.0% vs TC avg; above average)
Interview Lift: +15.8% (strong; measured across resolved cases with interview)
Typical Timeline: 2y 6m avg prosecution; 29 applications currently pending
Career History: 757 total applications across all art units
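
The headline figures are simple ratios over the examiner's resolved cases. Here is a minimal sketch of the arithmetic, assuming allow rate = granted / resolved and that the TC delta is a straight subtraction; the 62% TC baseline below is backed out of the +21.0% figure shown above, not published on this page:

```python
# Hypothetical reconstruction of the headline examiner stats. The formulas
# are our assumptions about how the dashboard computes them, not its
# documented methodology.
granted, resolved = 604, 728
allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.1%}")            # 83.0%

tc_avg = 0.62  # assumed: backed out of the +21.0% delta shown above
print(f"delta vs TC avg:   {allow_rate - tc_avg:+.1%}")  # +21.0%
```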

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 63.7% (+23.7% vs TC avg)
§102: 21.5% (-18.5% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)
Tech Center averages are estimates; based on career data from 728 resolved cases.
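
Read as this examiner's rejection mix by statute (our interpretation of the chart), the figures can be reproduced from per-statute rejection counts. A sketch under that assumption, with illustrative counts chosen to match the displayed shares; note that every delta on the page backs out to the same flat 40% baseline:

```python
# Hypothetical: treat each figure as this examiner's share of rejections
# citing a given statute. Counts are illustrative, chosen to match the page;
# the four shares sum to 96%, so ~4% presumably cites other grounds.
counts = {"103": 637, "102": 215, "112": 63, "101": 45, "other": 40}
total = sum(counts.values())
# The deltas all imply one baseline (4.5+35.5 = 63.7-23.7 = 21.5+18.5 =
# 6.3+33.7 = 40), so the "TC average estimate" appears to be a flat 40%.
TC_BASELINE = 0.40
for statute in ("101", "103", "102", "112"):
    share = counts[statute] / total
    print(f"§{statute}: {share:.1%} ({share - TC_BASELINE:+.1%} vs TC avg)")
```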

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

"A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 16-29 are rejected under 35 U.S.C. 103 as being unpatentable over MOULE et al. (US 20170039756 A1; hereafter MOULE) in view of Shikata (US 20210044738 A1).

As of Claim 23: MOULE teaches a device (¶0049 and note a 3D ("three-dimensional") rendering device 101 (interchangeably referred to hereafter as device 101)) for providing an orientation of a predetermined number of real cameras for measuring a real three-dimensional object using the real cameras and a pattern in a predetermined coordinate system (¶¶0056, 0057, 0071 and note FIG. 2, which depicts a system 200 for automatic alignment and projection mapping. Indeed, alignment system 105 can comprise system 200, and furthermore components of system 100 can comprise components of system 200 as desired. System 200 comprises a computing device 201, a projector 207, and at least two cameras 214-1, 214-2 (interchangeably referred to hereafter, collectively, as cameras 214 and, generically, as a camera 214), each of projector 207 and at least two cameras 214 mounted relative to a three-dimensional environment 215 with respective fields of view of cameras 214 at least partially overlapping a projection area of projector 207 on three-dimensional environment 215. In particular, each of cameras 214 is aimed at environment 215 at different positions and/or different orientations, such that cameras 214, taken together, are viewing environment 215), wherein for a measurement the pattern is configured to be projected onto a surface of the real three-dimensional object with at least one projector and configured to be recorded (¶¶0062-0065 and note a respective location and respective orientation of physical object 216 relative to a physical origin of the three-dimensional environment 215 by comparing virtual model 240 to the cloud of points; set a virtual location and virtual orientation of the virtual model 240 in the virtual three-dimensional environment with respect to a virtual origin, related to the physical origin, to match the respective location and the respective orientation of the physical object 216; and control projector 207 to illuminate physical object 216 with images adjusted for the virtual location and virtual orientation of the virtual model 240), at least sectionwise, with each real camera, wherein a three-dimensional virtual model of an ideal object corresponding to the real object exists (¶¶0063-0065), wherein A) the at least one projector is configured to be aligned on the surface of the real object in such a way that a position of the pattern corresponds to a predetermined position of the pattern on the virtual model (¶¶0062-0067).

Shikata is a similar or analogous system to the claimed invention, as evidenced by Shikata's teaching of a system and method for automatic alignment and projection mapping, which would have prompted a predictable variation of MOULE by applying Shikata's known principle whereby the device comprises a computing unit configured to: B) define or provide a plurality of target marks on the surface of the ideal object in the three-dimensional virtual model (¶¶0042, 0062-0063 and note FIG. 11: a person 1101 exemplifies the moving object to which the board on which the marker patterns are printed or the like is assigned. Other than the case of the moving object holding the board, the marker pattern may be projected on the field by, for example, a projector), C) automatically determine reference information for each camera, wherein the reference information comprises identification information that includes information as to which at least one target mark of the plurality of target marks the respective camera captures when the respective camera and the projector of the pattern are arranged and oriented in a predetermined manner (¶¶0049, 0052-0055 and note device 101 can generate rendered image data 110 from pose data 109p, for example by rendering existing image data (not depicted) for projection by projector 107. In FIG. 1, solid lines connecting components show flow of image and/or video data there between, while the stippled line connecting alignment system 105 to device 101 and/or device 108 shows flow of pose data 109p and object data 109o there between. Pose data 109p can also be referred to as calibration data, as pose data 109p represents a calibration of system 100 to account for a position of projector 107 and/or positions of objects upon which images are to be projected. Object data 109o generally comprises a virtual location and virtual orientation of a virtual model of an object in a virtual three-dimensional environment, with respect to a virtual origin, that corresponds to a physical three-dimensional environment where the object is located), and the reference information also comprises location information associated with the respective camera and target mark, which includes the information where in a captured image of a field of view of the respective camera the at least one target mark appears when the respective camera and the at least one projector of the pattern are arranged and oriented in the predetermined manner, wherein the reference information for each camera is determined based on the three-dimensional virtual model of the ideal object and corresponding virtual representations of the cameras and the pattern, and D) record an image of the surface of a real object with a real camera, automatically determine indications, or control information, or both for finding a specified target mark in the image of the respective real camera and for orienting the respective real camera with a real object (¶¶0049, 0052-0055) based on the determined reference information for the respective camera, and automatically provide the indications, or control information, or both at a predetermined interface, wherein the computing unit is configured to perform step D) for each camera of the predetermined number of cameras (¶¶0049, 0052-0055 and note FIG. 1 depicts a system 100 comprising: a 3D ("three-dimensional") rendering device 101 (interchangeably referred to hereafter as device 101); a content player 103; an alignment system 105; and a projector 107. In general, device 101 is in communication with content player 103 and alignment system 105, and content player 103 is in communication with projector 107. As depicted, device 101 and content player 103 are combined into one device 108; however, in other implementations device 101 and content player 103 are separate devices. Alignment system 105 is configured to generate pose data 109p comprising a virtual location, a virtual orientation, and virtual lens characteristics of a virtual camera corresponding to projector 107, and communicate pose data 109p to device 101).

In view of motivations such as further improving image quality, one of ordinary skill in the art would have implemented the claimed variation of the prior art system of MOULE. Therefore, the claimed invention would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

As of Claim 24: MOULE in view of Shikata further teaches the computing unit is configured to search with each real camera and the real object, using a real pattern projected onto the surface of the real object (Shikata ¶¶0042, 0062-0063), for the at least one target mark on the surface of the real object in accordance with the identification information in the image of the respective camera and subsequently to calculate the indications, or control information, or both for finding the at least one target mark (MOULE ¶¶0062, 0065).

As of Claim 25: MOULE in view of Shikata further teaches the computing unit is further configured to determine indications, or control information, or both for orienting the respective real camera by comparing the identification information and at least one target mark visible in the image of the respective real camera (Shikata ¶¶0042, 0062-0063), or by comparing the determined location information of the at least one target mark for the respective camera with the position of the at least one target mark in the image of the respective real camera, or by both (MOULE ¶¶0070, 0071 and note a known target pattern 641. Each of cameras 214 captures a respective image 650-1, 650-2 of known target pattern 641, which are received at device 201 (e.g., at block 301). Processor 220 compares each of images 650-1, 650-2 to representation 642 (e.g., at block 303, as represented by the stippled lines around images 650-1, 650-2 and representation 642 within processor 220 in FIG. 6) to determine (e.g., at block 307) a respective given position 670-1, 670-2 of each of cameras 214-1, 214-2, as well as an origin 680 of environment 215).

As of Claim 26: MOULE in view of Shikata further teaches the computing unit is further configured to determine a quality measure for a deviation of the current orientation of the respective real camera from the reference information determined for the respective camera and to make the quality measure available as an indication, or control information, or both at the predetermined interface (MOULE ¶¶0100-0104).

As of Claim 27: MOULE in view of Shikata further teaches data of the three-dimensional virtual model of the ideal object is CAD data, or wherein the pattern is a two-dimensionally coded pattern, or both (Shikata ¶¶0066, 0095-0096).

As of Claim 28: MOULE in view of Shikata further teaches the device further comprises at least one auxiliary camera (Shikata ¶0092).

As of Claim 29: MOULE in view of Shikata further teaches the interface is connectable to a display, or to a loudspeaker, or to a control device for a plurality of motors, or a combination thereof, wherein the indications, or control information, or both are transmitted to the display, or to the loudspeaker, or to the control device, or to a combination thereof (Shikata ¶¶0033, 0039, 0043 and note the images and the photographing parameters stored in the storage unit 331 can be displayed on the display device of the UI unit 260, in a case where the images captured at the time of focus processing or the like executed in the image-capturing apparatuses 101 to 110 and the information about the photographing parameters are displayed on the display device of the UI unit 260).

As of Claim 22: MOULE in view of Shikata further teaches a non-transitory computer readable medium comprising a computer program comprising instructions configured to perform the method of claim 16 when the computer program is executed on a computer (MOULE ¶0061).

As of Claims 16-21: Claims 16-21 are the method claims corresponding to device claims 23-29 and are addressed above.

Claim 30 is rejected under 35 U.S.C. 103 as being unpatentable over MOULE et al. (US 20170039756 A1; hereafter MOULE) in view of Shikata (US 20210044738 A1), and further in view of Boyle et al. (US 20130162852 A1; hereafter Boyle).

As of Claim 30: Boyle is a similar or analogous system to the claimed invention, as evidenced by Boyle's teaching of a portable system that automatically records videos from the vantage point of a sports fan, of a spectator, or of a competition judge, which would have prompted a predictable variation of MOULE by applying Boyle's known principle of a system comprising: the device and a display (¶¶0070, 0117 and note remote device 16 is equipped with a display. Pictures of footage taken by the camera will be shown on the display in real time. Further, the remote device may have controls that cause camera 46 to turn in different directions. The user, after putting sufficient distance between him or her and camera 46 (step 160), may direct camera 46 to turn until he or she is found properly centered in the picture or footage displayed on the remote device, and optionally any suitable combination of input devices and display devices), or a loudspeaker, or a control device for a plurality of motors, or a combination thereof, wherein an interface is connected to the display, or to the loudspeaker, or to the control device, or to a combination thereof, and wherein the indications, or control information, or both provided at the interface are configured to be transmitted to the display, or to the loudspeaker (¶0070 and note speakers used), or to the control device, or to a combination thereof, wherein the indications, or control information, or both are processed by the display, or by the loudspeaker, or by the control device, or by a combination thereof, wherein the indications are output on the display, or on the loudspeaker, or the control information is transmitted by the control device to the motors, or a combination thereof, in such a way that the real cameras are automatically oriented in accordance with the determined reference information of the real cameras (¶¶0024, 0035, 0080, 0088, 0089 and note a motor assembly such as the one depicted in FIG. 10, which automatically orients a directional device, such as a camera).

In view of motivations such as improving the precision of location determination when using GPS for determining camera orientation (Boyle notes that the camera's field of view is preferably set or controlled, automatically or otherwise, based in part on consideration of the known orientation precision of the automatic recording system), one of ordinary skill in the art would have implemented the claimed variation of the prior art system of MOULE. Therefore, the claimed invention would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEKONNEN D DAGNEW, whose telephone number is (571) 270-5092. The examiner can normally be reached 8:00AM-5:00PM M-Th. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lin Ye, can be reached at 571-272-7372. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MEKONNEN D DAGNEW/
Primary Examiner, Art Unit 2638
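
To make the claim mapping easier to follow, here is a minimal sketch of steps B) through D) as the Office Action recites them: target marks are defined on the virtual model (B), per-camera reference information (which marks are visible, and where in the image) is precomputed from the virtual model and virtual camera poses (C), and offsets between expected and detected mark positions become the indications for orienting each real camera (D). Every name here, the pinhole projection model, and the offset-based indication are illustrative assumptions of ours, not the application's or the cited references' actual algorithms.

```python
# Hypothetical illustration of claim steps B)-D) as recited in the OA.
# The pinhole model and all names are our assumptions, not the application's.
import numpy as np

def project(K, R, t, points):
    """Project Nx3 world points into pixels with a pinhole camera (z > 0)."""
    cam = (R @ points.T + t.reshape(3, 1)).T      # world -> camera frame
    px = (K @ (cam.T / cam[:, 2])).T[:, :2]       # perspective divide
    return px, cam[:, 2] > 0

def reference_info(K, R, t, marks, width, height):
    """C) per-camera reference info: ids of the target marks this virtual
    camera would see, mapped to their expected pixel locations."""
    px, in_front = project(K, R, t, marks)
    visible = in_front & (px[:, 0] >= 0) & (px[:, 0] < width) \
                       & (px[:, 1] >= 0) & (px[:, 1] < height)
    return {int(i): px[i] for i in np.flatnonzero(visible)}

def orientation_indications(reference, detected):
    """D) indications for orienting the real camera: pixel offset between
    where each mark should appear and where it was actually detected."""
    return {i: detected[i] - uv for i, uv in reference.items() if i in detected}

# B) target marks on the ideal object (toy coordinates, meters).
marks = np.array([[0.0, 0.0, 2.0], [0.1, -0.05, 2.0]])
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
ref = reference_info(K, np.eye(3), np.zeros(3), marks, 640, 480)
# Simulate a slightly misaligned real camera: each mark lands 3 px right, 1 px up.
detected = {i: uv + np.array([3.0, -1.0]) for i, uv in ref.items()}
print(orientation_indications(ref, detected))   # {0: [3., -1.], 1: [3., -1.]}
```

In the claims, such indications would then be routed to the interface of claim 29 (display, loudspeaker, or motor control device) so the real cameras can be reoriented, which is the feature the examiner maps to Boyle for claim 30.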

Prosecution Timeline

Nov 15, 2024: Application Filed
Feb 07, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593143: SOLID-STATE IMAGING DEVICE (2y 5m to grant; granted Mar 31, 2026)
Patent 12586142: IMAGE CAPTURING METHOD AND DISPLAY METHOD FOR RECOGNIZING A RELATIONSHIP AMONG A PLURALITY OF IMAGES DISPLAYED ON A DISPLAY SCREEN (2y 5m to grant; granted Mar 24, 2026)
Patent 12585173: LENS BARREL (2y 5m to grant; granted Mar 24, 2026)
Patent 12581022: DATA CREATION METHOD AND DATA CREATION PROGRAM (2y 5m to grant; granted Mar 17, 2026)
Patent 12574662: THRESHOLD VALUE DETERMINATION METHOD, THRESHOLD VALUE DETERMINATION PROGRAM, THRESHOLD VALUE DETERMINATION DEVICE, PHOTON NUMBER IDENTIFICATION SYSTEM, PHOTON NUMBER IDENTIFICATION METHOD, AND PHOTON NUMBER IDENTIFICATION PROCESSING PROGRAM (2y 5m to grant; granted Mar 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 99% (+15.8%)
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 728 resolved cases by this examiner. Grant probability derived from career allow rate.
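
The projection figures compose directly from the examiner stats above. A sketch of the arithmetic as we read it; treating the with-interview figure as base rate plus lift, capped at 100%, is our assumption:

```python
# Hypothetical: how the projection figures appear to compose. The base grant
# probability is the career allow rate; the with-interview figure adds the
# +15.8% interview lift, capped at 100% (our assumption, not documented).
base = 604 / 728                       # 83% career allow rate
lift = 0.158                           # interview lift reported above
with_interview = min(base + lift, 1.0)
print(f"grant probability: {base:.0%}")           # 83%
print(f"with interview:    {with_interview:.0%}") # 99%
```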
